CN109766799B - Parking space recognition model training method and device and parking space recognition method and device


Info

Publication number: CN109766799B
Authority: CN (China)
Prior art keywords: information, parking space, scene image, vehicle, model
Legal status: Active
Application number: CN201811621098.7A
Other languages: Chinese (zh)
Other versions: CN109766799A
Inventor: 杨树
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811621098.7A
Publication of CN109766799A
Application granted
Publication of CN109766799B

Abstract

The embodiment of the invention provides a parking space recognition model training method and device, wherein the method comprises: acquiring multiple groups of sample data, each group of sample data comprising trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space; and training a parking space recognition model with the multiple groups of sample data. The training is performed as follows: the trajectory information of the vehicle and the feature information of the scene image are input into the parking space recognition model, the real information on whether the scene image contains a parking space is compared with the prediction information output by the parking space recognition model, and the parameters of the parking space recognition model are adjusted according to the comparison result. The embodiment of the invention also provides a parking space recognition method and device. The embodiment of the invention requires fewer training samples and places lower demands on computing power.

Description

Parking space recognition model training method and device and parking space recognition method and device
Technical Field
The invention relates to the technical field of image processing, and in particular to a parking space recognition model training method and device and a parking space recognition method and device.
Background
Existing parking space recognition methods generally include the following:
First, a predefined visual operator is used to recognize parking spaces. This method places high requirements on the clarity of the scene image; scene images captured under conditions such as excessively strong light, excessively weak light, or heavy rain have low clarity, so the accuracy of parking space recognition is low.
Second, parking spaces are recognized by Global Positioning System (GPS) positioning. This approach cannot be used where there is no GPS signal or the GPS signal is weak.
Third, parking spaces are recognized with a deep learning model. In this approach, multiple groups of scene images, both containing and not containing parking spaces, are used in advance to train the deep learning model. During recognition, the scene image is input into the trained deep learning model, which outputs information on whether the scene image contains a parking space.
The third method is more accurate and stable than the first two, but it has the disadvantage that a large number of scene images are required to train the deep learning model, and the training and recognition processes demand high computing power.
Disclosure of Invention
The embodiment of the invention provides a parking space recognition model training method and device and a parking space recognition method and device, which are used for at least solving the technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a parking space recognition model training method, including:
acquiring multiple groups of sample data, wherein each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
training a parking space recognition model by adopting the multiple groups of sample data; the training mode is as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
In one embodiment, the track information of the vehicle is acquired by:
and acquiring the track information of the vehicle by using the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
In one embodiment, the obtaining of the feature information of the scene image includes:
and processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
In one embodiment, the parking space recognition model is a deep learning model.
In a second aspect, an embodiment of the present invention provides a parking space identification method, including:
acquiring track information of a vehicle and characteristic information of a scene image;
inputting the track information of the vehicle and the characteristic information of the scene image into a parking space identification model, and outputting the prediction information of whether the scene image contains a parking space or not by the parking space identification model;
the parking space identification model is obtained by training with a plurality of groups of sample data, and each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space.
In one embodiment, the trajectory information of the vehicle is acquired by:
and acquiring the track information of the vehicle by using the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
In one embodiment, the feature information of the scene image is obtained by:
and processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
In a third aspect, an embodiment of the present invention provides a parking space recognition model training device, including:
the sample acquisition module is used for acquiring a plurality of groups of sample data, wherein each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
the training module is used for training a parking space identification model by adopting the multiple groups of sample data; the training mode is as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
In one embodiment, the sample acquisition module comprises a trajectory information acquisition sub-module;
the track information acquisition submodule is used for acquiring track information of the vehicle by utilizing global positioning system information and/or inertial measurement unit information of the vehicle in a preset time period.
In one embodiment, the sample acquisition module comprises a characteristic information acquisition sub-module;
the characteristic information acquisition submodule is used for processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
In one embodiment, the parking space recognition model trained by the training module is a deep learning model.
In a fourth aspect, an embodiment of the present invention provides a parking space recognition apparatus, including:
the information acquisition module is used for acquiring track information of the vehicle and characteristic information of the scene image;
the input module is used for inputting the track information of the vehicle and the characteristic information of the scene image into a parking space identification model, and the parking space identification model outputs the prediction information of whether the scene image contains a parking space;
the parking space identification model is obtained by training with a plurality of groups of sample data, and each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space.
In one embodiment, the information acquisition module comprises a first acquisition submodule;
the first obtaining submodule is used for obtaining the track information of the vehicle by utilizing the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
In one embodiment, the information acquisition module comprises a second acquisition submodule;
and the second obtaining submodule is used for processing the scene image by adopting a preset visual operator to obtain the characteristic information of the scene image.
In a fifth aspect, an embodiment of the present invention provides a parking space recognition model training device, where functions of the device may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In one possible design, the structure of the device includes a processor and a memory, wherein the memory is used for storing a program for supporting the device to execute the above-mentioned parking space recognition model training method, and the processor is configured to execute the program stored in the memory. The device may also include a communication interface for communicating with other devices or a communication network.
In a sixth aspect, an embodiment of the present invention provides a parking space recognition device, where functions of the device may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In one possible design, the structure of the device includes a processor and a memory, wherein the memory is used for storing a program for supporting the device to execute the above-mentioned parking space identification method, and the processor is configured to execute the program stored in the memory. The device may also include a communication interface for communicating with other devices or a communication network.
In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium for storing computer software instructions for the above apparatus, which includes a program for executing the above method.
One of the above technical solutions has the following advantages or beneficial effects:
the parking space recognition model training method and the parking space recognition model training device provided by the embodiment of the invention adopt the track information of the vehicle, the characteristic information of the scene image and the real information of whether the scene image contains the parking space or not to train the parking space recognition model. Because the parking space recognition model is trained without depending on the single information of the unprocessed original scene image, fewer training samples are required; and the performance requirement on a single module in the parking space identification model is reduced, so that the requirement on the computing capacity is not high.
In addition, the parking space recognition method and device provided by the embodiment of the invention input the trajectory information of the vehicle and the feature information of the scene image into the parking space recognition model, and the parking space recognition model outputs prediction information on whether the scene image contains a parking space. Because recognition does not rely on the raw scene image alone, the performance requirement on any single module in the parking space recognition model is reduced, and the demand on computing power is modest.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
FIG. 1 is a flowchart illustrating an implementation of a parking space recognition model training method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for identifying a parking space according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a parking space recognition model training device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another parking space recognition model training device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a parking space recognition device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another parking space recognition device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a parking space recognition model training device according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The embodiment of the invention mainly provides a parking space recognition model training method and device and a parking space recognition method and device, and the technical scheme is developed and described through the following embodiments respectively.
Fig. 1 is a flowchart of an implementation of a parking space recognition model training method according to an embodiment of the present invention, including:
S11: acquiring multiple groups of sample data, wherein each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
S12: training a parking space recognition model by adopting the multiple groups of sample data;
wherein, the training mode can be as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
In one possible embodiment, in step S11, the obtaining of the trajectory information of the vehicle includes:
acquiring the track information of the vehicle by using GPS information and/or Inertial Measurement Unit (IMU) information of the vehicle in a preset time period.
The GPS information records the location of the vehicle. From the GPS information of the vehicle at multiple moments within a predetermined time period, the positions of the vehicle at those moments can be determined, and the trajectory information of the vehicle is obtained by connecting those positions. Alternatively, the position of the vehicle may be recorded each time the vehicle travels a fixed distance (e.g., 1 meter), and all recorded positions are connected to obtain the trajectory information of the vehicle.
The IMU information records motion information of the vehicle, including acceleration, angular velocity, and the like. From the acceleration and angular velocity of the vehicle at multiple moments within the predetermined time period, the motion trajectory of the vehicle relative to its starting position in that period can be calculated; combined with the starting position information, the trajectory information of the vehicle can be determined.
In one example, if the time corresponding to the scene image is T0, the predetermined time period may be [T0 − T, T0], where T is the length of the predetermined time period. As can be seen, the predetermined time period may be a period of time before the time corresponding to the scene image.
The length of the predetermined period of time may be set according to specific circumstances. For example, when a space recognition model for a certain parking lot is trained, the length of the predetermined period of time may be determined from an average of the time required for a plurality of vehicles to enter the parking lot until a space is found. The larger the average value, the longer the length of the predetermined period.
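Purely as an illustrative sketch (not part of the patent text), the following Python snippet shows one way the trajectory could be assembled from GPS fixes or from IMU readings over the window [T0 − T, T0]; the data layout, sampling interval, and simple planar dead-reckoning scheme are assumptions, not the patent's implementation.
```python
import math

def trajectory_from_gps(gps_fixes, t0, window):
    """Keep the GPS positions that fall in [t0 - window, t0]; connecting them gives the trajectory."""
    return [(x, y) for (t, x, y) in gps_fixes if t0 - window <= t <= t0]

def trajectory_from_imu(start_xy, start_heading, imu_samples, dt):
    """Simple planar dead reckoning: integrate acceleration and yaw rate relative to the start position."""
    x, y = start_xy
    heading, speed = start_heading, 0.0
    points = [(x, y)]
    for accel, yaw_rate in imu_samples:          # one (m/s^2, rad/s) pair per dt seconds
        speed += accel * dt
        heading += yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        points.append((x, y))
    return points

# Example with made-up data: three GPS fixes in the last 10 s, and three IMU samples at 1 Hz.
gps = [(95.0, 0.0, 0.0), (98.0, 3.0, 0.5), (100.0, 6.0, 1.0)]
print(trajectory_from_gps(gps, t0=100.0, window=10.0))
print(trajectory_from_imu((0.0, 0.0), 0.0, [(0.5, 0.0), (0.5, 0.1), (0.0, 0.1)], dt=1.0))
```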
In one possible implementation, in step S11, the obtaining manner of the feature information of the scene image may include: and processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
In a possible implementation manner, a preset gradient operator may be used to process a scene image, so as to obtain edge information of an object in the scene image, which is used as feature information of the scene image.
The scene image may be acquired by a camera mounted on the vehicle. A still scene image can be captured directly by the camera; alternatively, video containing the scene may be captured by the camera, and a still scene image extracted from the video.
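As a hedged illustration of a preset gradient operator (the patent does not name a specific one), a Sobel filter could be used to turn the scene image into edge features; OpenCV is assumed here, and the 64×64 resize and flattening step are arbitrary choices for the sketch.
```python
import cv2

def edge_features(image_path, size=(64, 64)):
    """Extract gradient-magnitude edge information from a scene image and flatten it into a feature vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    magnitude = cv2.magnitude(gx, gy)                 # edge strength per pixel
    magnitude = cv2.resize(magnitude, size)           # fixed-size feature map
    return (magnitude / (magnitude.max() + 1e-6)).flatten()

# features = edge_features("scene.jpg")   # hypothetical image path
```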
In a possible implementation, in step S11, the real information on whether the scene image contains a parking space may take one of two values: yes, meaning the scene image contains a parking space, or no, meaning the scene image does not contain a parking space. The parking space may be a vacant parking space not occupied by another vehicle. The real information can be obtained by manually inspecting the scene image.
The multiple groups of sample data used to train the parking space recognition model may include positive samples and negative samples. A positive sample includes: trajectory information of a vehicle, feature information of a scene image, and real information indicating that the scene image contains a parking space. A negative sample includes: trajectory information of a vehicle, feature information of a scene image, and real information indicating that the scene image does not contain a parking space.
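For concreteness only, one hypothetical way to structure a group of sample data (positive or negative) is sketched below; the field names are assumptions introduced for illustration.
```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ParkingSample:
    """One group of sample data: vehicle trajectory, scene-image features, and the real label."""
    trajectory: List[Tuple[float, float]]   # (x, y) positions over the predetermined time period
    image_features: List[float]             # e.g., flattened edge information of the scene image
    has_parking_space: bool                 # True for a positive sample, False for a negative sample

positive = ParkingSample(trajectory=[(0.0, 0.0), (5.0, 2.0)], image_features=[0.1, 0.8], has_parking_space=True)
negative = ParkingSample(trajectory=[(0.0, 0.0), (4.0, 1.0)], image_features=[0.0, 0.2], has_parking_space=False)
```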
It should be noted that for a specific parking lot, the shape, entrance, and other features of the lot are fixed, so the trajectory of a vehicle from entering the lot to finding a parking space generally follows a regular pattern. In view of this, the embodiment of the present invention includes the trajectory information of the vehicle in the sample data used to train the parking space recognition model.
In a possible implementation, the parking space recognition model may be a deep learning model. When the parking space recognition model is trained, the trajectory information of the vehicle and the feature information of the scene image are input into the model, and the model outputs prediction information on whether the scene image contains a parking space. The prediction information is then compared with the real information on whether the scene image contains a parking space, and the parameters of the model are adjusted when the comparison result is inconsistent. By adjusting the parameters repeatedly in this way, the accuracy with which the trained parking space recognition model recognizes parking spaces reaches a preset requirement.
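The patent does not specify a network architecture or training algorithm, so the following is only a minimal PyTorch sketch of the training mode described above, assuming the trajectory has been resampled to a fixed number of (x, y) points and the image features flattened (e.g., the 64×64 edge map from the earlier sketch); gradient descent on a binary cross-entropy loss is one way of "adjusting the parameters according to the comparison result", and all names and dimensions are hypothetical.
```python
import torch
import torch.nn as nn

# Hypothetical dimensions: trajectory resampled to 64 (x, y) points; image features are a flattened 64x64 edge map.
TRAJ_DIM, FEAT_DIM = 64 * 2, 64 * 64

# A minimal parking space recognition model: concatenated trajectory + image features -> probability.
model = nn.Sequential(
    nn.Linear(TRAJ_DIM + FEAT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(trajectory, image_features, has_space):
    """One training iteration: predict, compare with the real label, adjust the parameters."""
    x = torch.cat([trajectory, image_features], dim=1)   # combine the two inputs
    prediction = model(x)                                 # predicted probability of a parking space
    loss = loss_fn(prediction, has_space)                 # compare prediction with real information
    optimizer.zero_grad()
    loss.backward()                                       # propagate the comparison result
    optimizer.step()                                      # adjust model parameters
    return loss.item()

# Example with random stand-in data for a batch of 8 samples.
traj = torch.randn(8, TRAJ_DIM)
feats = torch.randn(8, FEAT_DIM)
labels = torch.randint(0, 2, (8, 1)).float()
print(train_step(traj, feats, labels))
```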
Correspondingly, an embodiment of the present invention provides a parking space identification method. Fig. 2 is an implementation flowchart of the method, which includes:
S21: acquiring track information of a vehicle and characteristic information of a scene image;
S22: inputting the track information of the vehicle and the characteristic information of the scene image into a parking space identification model, and outputting the prediction information of whether the scene image contains a parking space or not by the parking space identification model;
the parking space identification model is obtained by training with a plurality of groups of sample data, and each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space.
In one possible embodiment, the track information of the vehicle in step S21 is obtained by:
and acquiring the track information of the vehicle by using the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
The setting manner of the predetermined time period may be the same as that in the above embodiment, and is not described herein again.
In one possible implementation, the feature information of the scene image in step S21 is obtained by:
and processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
The scene image can be processed by adopting a preset gradient operator to obtain edge information of an object in the scene image, and the edge information is used as the characteristic information of the scene image.
The manner of acquiring the scene image may be the same as that in the above embodiment, and is not described herein again.
In a possible implementation, the prediction information output by the parking space recognition model in step S22 on whether the scene image contains a parking space may take one of two values: yes, meaning the scene image contains a parking space, or no, meaning the scene image does not contain a parking space.
Alternatively, in a possible implementation, the prediction information output by the parking space recognition model in step S22 may be a numerical value indicating the likelihood that the scene image contains a parking space. For example, the value may range over [0, 1], where a larger value indicates a higher likelihood that the scene image contains a parking space.
The aforementioned parking spaces may include vacant parking spaces not occupied by other vehicles.
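Continuing the hypothetical sketches above (none of this appears in the patent), recognition simply runs a trained model forward; a threshold is one way to turn the [0, 1] score into the yes/no form described earlier. The helper below assumes any torch.nn.Module with the same input layout as the training sketch.
```python
import torch

def recognize(model, trajectory, image_features, threshold=0.5):
    """Return (probability, yes/no) for whether the scene image contains a vacant parking space."""
    model.eval()
    with torch.no_grad():
        prob = model(torch.cat([trajectory, image_features], dim=1)).item()
    return prob, prob >= threshold
```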
The embodiment of the invention also provides a parking space recognition model training device. Referring to fig. 3, fig. 3 is a schematic structural diagram of a parking space recognition model training device according to an embodiment of the present invention, including:
a sample obtaining module 310, configured to obtain multiple sets of sample data, where each set of sample data includes: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
the training module 320 is used for training a parking space identification model by adopting the multiple groups of sample data; the training mode is as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
The embodiment of the present invention further provides another parking space recognition model training device, whose structure is schematically shown in fig. 4, including:
a sample obtaining module 310, configured to obtain multiple sets of sample data, where each set of sample data includes: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
the training module 320 is used for training a parking space identification model by adopting the multiple groups of sample data; the training mode is as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
The sample acquiring module 310 may include:
and the track information acquisition submodule 311 is configured to acquire track information of the vehicle by using global positioning system information and/or inertial measurement unit information of the vehicle within a predetermined time period.
And the characteristic information obtaining sub-module 312 is configured to process the scene image by using a preset visual operator, and obtain characteristic information of the scene image.
In one possible embodiment, the parking space recognition model trained by the training module 320 is a deep learning model.
An embodiment of the present invention further provides a parking space recognition apparatus, whose structure is schematically shown in fig. 5; the apparatus includes:
an information obtaining module 510, configured to obtain track information of a vehicle and feature information of a scene image;
an input module 520, configured to input trajectory information of the vehicle and feature information of a scene image into a parking space recognition model, and output, by the parking space recognition model, prediction information of whether a parking space is included in the scene image;
the parking space identification model is obtained by training with a plurality of groups of sample data, and each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space.
The embodiment of the present invention further provides another parking space recognition apparatus, as shown in fig. 6, the apparatus includes:
the information obtaining module 510 is configured to obtain track information of the vehicle and feature information of the scene image.
An input module 520, configured to input the trajectory information of the vehicle and the feature information of the scene image into a parking space recognition model, and output, by the parking space recognition model, prediction information of whether a parking space is included in the scene image.
The information obtaining module 510 includes:
the first obtaining sub-module 511 is configured to obtain track information of the vehicle by using global positioning system information and/or inertial measurement unit information of the vehicle within a predetermined time period.
And the second obtaining sub-module 512 is configured to process the scene image by using a preset visual operator, and obtain feature information of the scene image.
The functions of each module in each apparatus in the embodiments of the present invention may refer to the corresponding description in the above method, and are not described herein again.
An embodiment of the present invention further provides a parking space recognition model training device and a parking space recognition device, and as shown in fig. 7, the parking space recognition model training device according to the embodiment of the present invention includes:
a memory 11 and a processor 12, the memory 11 storing a computer program operable on the processor 12. The processor 12, when executing the computer program, implements the method in the above embodiments. The number of the memory 11 and the processor 12 may be one or more.
The apparatus may further include:
and the communication interface 13 is used for communicating with external equipment and exchanging and transmitting data.
The memory 11 may comprise a high-speed RAM and may also include a non-volatile memory, such as at least one disk memory.
If the memory 11, the processor 12 and the communication interface 13 are implemented independently, the memory 11, the processor 12 and the communication interface 13 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 7, which does not indicate only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 11, the processor 12 and the communication interface 13 are integrated on a chip, the memory 11, the processor 12 and the communication interface 13 may complete communication with each other through an internal interface.
The structure of the parking space recognition device provided by the embodiment of the invention is the same as that shown in fig. 7, and is not repeated.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
In summary, in the embodiments of the present invention, the parking space recognition model is trained using the trajectory information of the vehicle, the feature information of the scene image, and the real information on whether the scene image contains a parking space. During recognition, the trajectory information of the vehicle and the feature information of the scene image are input into the trained parking space recognition model, which outputs prediction information on whether the scene image contains a parking space. Because neither training nor recognition relies on the unprocessed raw scene image alone, the performance requirement on any single module in the parking space recognition model is reduced, and the demand on computing power is modest.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (17)

1. A parking space recognition model training method, characterized by comprising:
acquiring multiple groups of sample data, wherein each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
training a parking space recognition model by adopting the multiple groups of sample data; the training mode is as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
2. The method of claim 1, wherein the trajectory information of the vehicle is obtained in a manner comprising:
and acquiring the track information of the vehicle by using the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
3. The method of claim 1, wherein the obtaining of the feature information of the scene image comprises:
and processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
4. The method of any one of claims 1 to 3, wherein the parking space recognition model is a deep learning model.
5. A parking space identification method, characterized by comprising:
acquiring track information of a vehicle and characteristic information of a scene image;
inputting the track information of the vehicle and the characteristic information of the scene image into a parking space identification model, and outputting the prediction information of whether the scene image contains a parking space or not by the parking space identification model;
the parking space identification model is obtained by training with a plurality of groups of sample data, and each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space.
6. The method of claim 5, wherein the trajectory information of the vehicle is obtained by:
and acquiring the track information of the vehicle by using the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
7. The method of claim 5, wherein the feature information of the scene image is obtained by:
and processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
8. A parking space recognition model training apparatus, characterized by comprising:
the sample acquisition module is used for acquiring a plurality of groups of sample data, wherein each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space;
the training module is used for training a parking space identification model by adopting the multiple groups of sample data; the training mode is as follows: inputting the track information of the vehicle and the characteristic information of the scene image into the parking space identification model, comparing the real information of whether the scene image contains the parking space with the prediction information output by the parking space identification model, and adjusting the parameters of the parking space identification model according to the comparison result.
9. The apparatus of claim 8, wherein the sample acquisition module comprises a trajectory information acquisition sub-module;
the track information acquisition submodule is used for acquiring track information of the vehicle by utilizing global positioning system information and/or inertial measurement unit information of the vehicle in a preset time period.
10. The apparatus of claim 8, wherein the sample acquisition module comprises a feature information acquisition sub-module;
the characteristic information acquisition submodule is used for processing the scene image by adopting a preset visual operator to acquire the characteristic information of the scene image.
11. The apparatus according to any one of claims 8 to 10, wherein the parking space recognition model trained by the training module is a deep learning model.
12. A parking space recognition apparatus, characterized by comprising:
the information acquisition module is used for acquiring track information of the vehicle and characteristic information of the scene image;
the input module is used for inputting the track information of the vehicle and the characteristic information of the scene image into a parking space identification model, and the parking space identification model outputs the prediction information of whether the scene image contains a parking space;
the parking space identification model is obtained by training with a plurality of groups of sample data, and each group of sample data comprises: trajectory information of a vehicle, feature information of a scene image, and real information on whether the scene image contains a parking space.
13. The apparatus of claim 12, wherein the information obtaining module comprises a first obtaining sub-module;
the first obtaining submodule is used for obtaining the track information of the vehicle by utilizing the global positioning system information and/or the inertial measurement unit information of the vehicle in a preset time period.
14. The apparatus of claim 12, wherein the information obtaining module comprises a second obtaining sub-module;
and the second obtaining submodule is used for processing the scene image by adopting a preset visual operator to obtain the characteristic information of the scene image.
15. A parking space recognition model training device, characterized in that the device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-4.
16. A parking space recognition device, characterized in that the device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 5-7.
17. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201811621098.7A 2018-12-28 2018-12-28 Parking space recognition model training method and device and parking space recognition method and device Active CN109766799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811621098.7A CN109766799B (en) 2018-12-28 2018-12-28 Parking space recognition model training method and device and parking space recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811621098.7A CN109766799B (en) 2018-12-28 2018-12-28 Parking space recognition model training method and device and parking space recognition method and device

Publications (2)

Publication Number Publication Date
CN109766799A CN109766799A (en) 2019-05-17
CN109766799B (en) 2021-02-12

Family

ID=66451775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811621098.7A Active CN109766799B (en) 2018-12-28 2018-12-28 Parking space recognition model training method and device and parking space recognition method and device

Country Status (1)

Country Link
CN (1) CN109766799B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115083199B (en) * 2021-03-12 2024-02-27 上海汽车集团股份有限公司 Parking space information determining method and related equipment thereof
CN113537163B (en) * 2021-09-15 2021-12-28 苏州魔视智能科技有限公司 Model training method and system for parking space detection

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106611510A (en) * 2015-10-27 2017-05-03 富士通株式会社 Parking stall detecting device and method and electronic equipment
CN106652551A (en) * 2016-12-16 2017-05-10 浙江宇视科技有限公司 Parking stall detection method and device
CN107067796A (en) * 2016-12-28 2017-08-18 深圳市金溢科技股份有限公司 A kind of parking management server, method and system
CN107591005A (en) * 2017-08-09 2018-01-16 深圳市金溢科技股份有限公司 Parking area management method, server and the system that dynamic Static Detection is combined
CN107610499A (en) * 2016-07-11 2018-01-19 富士通株式会社 Detection method, detection means and the electronic equipment of parking stall state
CN108460983A (en) * 2017-02-19 2018-08-28 泓图睿语(北京)科技有限公司 Parking stall condition detection method based on convolutional neural networks
CN108550277A (en) * 2018-06-04 2018-09-18 济南浪潮高新科技投资发展有限公司 A kind of parking stall identification and querying method based on picture depth study
CN108766022A (en) * 2018-06-11 2018-11-06 青岛串并联电子科技有限公司 Parking position state identification method based on machine learning and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102450374B1 (en) * 2016-11-17 2022-10-04 삼성전자주식회사 Method and device to train and recognize data

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106611510A (en) * 2015-10-27 2017-05-03 富士通株式会社 Parking stall detecting device and method and electronic equipment
CN107610499A (en) * 2016-07-11 2018-01-19 富士通株式会社 Detection method, detection means and the electronic equipment of parking stall state
CN106652551A (en) * 2016-12-16 2017-05-10 浙江宇视科技有限公司 Parking stall detection method and device
CN107067796A (en) * 2016-12-28 2017-08-18 深圳市金溢科技股份有限公司 A kind of parking management server, method and system
CN108460983A (en) * 2017-02-19 2018-08-28 泓图睿语(北京)科技有限公司 Parking stall condition detection method based on convolutional neural networks
CN107591005A (en) * 2017-08-09 2018-01-16 深圳市金溢科技股份有限公司 Parking area management method, server and the system that dynamic Static Detection is combined
CN108550277A (en) * 2018-06-04 2018-09-18 济南浪潮高新科技投资发展有限公司 A kind of parking stall identification and querying method based on picture depth study
CN108766022A (en) * 2018-06-11 2018-11-06 青岛串并联电子科技有限公司 Parking position state identification method based on machine learning and system

Also Published As

Publication number Publication date
CN109766799A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109343061B (en) Sensor calibration method and device, computer equipment, medium and vehicle
CN106952303B (en) Vehicle distance detection method, device and system
CN111027381A (en) Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN109766799B (en) Parking space recognition model training method and device and parking space recognition method and device
CN112084810A (en) Obstacle detection method and device, electronic equipment and storage medium
CN109961509B (en) Three-dimensional map generation and model training method and device and electronic equipment
CN111199087A (en) Scene recognition method and device
CN111982132B (en) Data processing method, device and storage medium
CN108986253B (en) Method and apparatus for storing data for multi-thread parallel processing
CN112455465B (en) Driving environment sensing method and device, electronic equipment and storage medium
CN113246858A (en) Vehicle driving state image generation method, device and system
CN110414374B (en) Method, device, equipment and medium for determining obstacle position and attitude
CN117079238A (en) Road edge detection method, device, equipment and storage medium
CN113409393B (en) Method and device for identifying traffic sign
CN111126336B (en) Sample collection method, device and equipment
CN111192327A (en) Method and apparatus for determining obstacle orientation
CN111126154A (en) Method and device for identifying road surface element, unmanned equipment and storage medium
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN110189372A (en) Depth map model training method and device
CN115909271A (en) Parking space identification method and device, vehicle and storage medium
CN113298099B (en) Driving behavior recognition method and device, electronic equipment and storage medium
CN111177878A (en) Method, device and terminal for screening derivative simulation scenes
CN113869440A (en) Image processing method, apparatus, device, medium, and program product
CN113727064B (en) Method and device for determining camera field angle
CN114202574A (en) Positioning reliability detection method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant