WO2023005808A1 - 停车系统中调整摄像头的方法和装置 - Google Patents

停车系统中调整摄像头的方法和装置 Download PDF

Info

Publication number
WO2023005808A1
WO2023005808A1, PCT/CN2022/107188, CN2022107188W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
camera
identification
picture
vehicles
Prior art date
Application number
PCT/CN2022/107188
Other languages
English (en)
French (fr)
Inventor
董一鸿
Original Assignee
西门子(中国)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西门子(中国)有限公司 filed Critical 西门子(中国)有限公司
Publication of WO2023005808A1 publication Critical patent/WO2023005808A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07B - TICKET-ISSUING APPARATUS; FARE-REGISTERING APPARATUS; FRANKING APPARATUS
    • G07B15/00 - Arrangements or apparatus for collecting fares, tolls or entrance fees at one or more control points
    • G07B15/02 - Arrangements or apparatus for collecting fares, tolls or entrance fees at one or more control points taking into account a variable factor such as distance or time, e.g. for passenger transport, parking systems or car rental systems
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles

Definitions

  • the present application relates to the field of artificial intelligence, and more specifically, relates to a method and device for adjusting a camera in a parking system.
  • the present application provides a method and device for adjusting a camera in a parking system, which enable the camera in the parking system to be installed accurately and effectively.
  • a method for adjusting a camera in a parking system is provided, comprising: acquiring a first vehicle picture captured by the camera, the first vehicle picture including pictures of a plurality of vehicles in a target parking area; obtaining, according to the first vehicle picture and a vehicle area recognition model, a recognition picture in which the plurality of vehicles have been recognized, the recognition picture identifying the recognition area of each of the plurality of vehicles and including an identification of each vehicle's recognition area, the identification being used to indicate the degree of overlap between the recognition areas of the plurality of vehicles; and sending the recognition picture, wherein the degree of overlap between the recognition areas of the plurality of vehicles is used to adjust the parameters of the camera.
  • in the embodiments of the present application, based on the vehicle picture captured with the camera's initial parameters and the vehicle area recognition model, a recognition picture that identifies the recognition area of each vehicle is obtained; the recognition picture includes an identification used to indicate the degree of overlap between the recognition areas of the vehicles, and the recognition picture is sent.
  • in this way, after receiving the recognition picture, the hardware engineer can adjust the camera parameters in real time and purposefully based on the degree of overlap between the recognition areas of the vehicles, which not only greatly reduces the time needed to adjust the camera but also allows the camera to be installed accurately and effectively, thereby helping to improve the accuracy of identifying vehicle information.
  • further, since recognizing a vehicle's recognition area is much simpler than recognizing specific license plate information, using the degree of overlap between the recognition areas of the vehicles to adjust the camera requires no complex computation, which improves processing speed and makes the method applicable to more parking systems.
  • the identification includes first indication information and/or second indication information
  • the first indication information is used to indicate recognition areas, among the recognition areas of the plurality of vehicles, whose degree of overlap is higher than or equal to a threshold value
  • the second indication information is used to indicate recognition areas, among the recognition areas of the plurality of vehicles, whose degree of overlap is lower than the threshold value.
  • in the above technical solution, the identification for recognition areas whose degree of overlap is higher than or equal to the threshold is set to be different from the identification for recognition areas whose degree of overlap is lower than the threshold, so that after receiving the recognition picture the hardware engineer can tell at a glance which recognition areas overlap more and which overlap less; this avoids the hardware engineer mistaking a highly overlapping recognition area for a slightly overlapping one, or vice versa, when adjusting the camera, and thus improves the efficiency of adjusting the camera.
  • the color indicated by the first indication information is different from the color indicated by the second indication information.
  • setting the color indicated by the first indication information to be different from the color indicated by the second indication information reflects, in a more intuitive way, the difference between recognition areas with a higher degree of overlap and those with a lower degree of overlap.
  • the recognition picture further identifies the overlap value between at least two recognition areas, among the recognition areas of the plurality of vehicles, whose degree of overlap is higher than or equal to the threshold.
  • because the recognition picture identifies the overlap value between at least two recognition areas whose degree of overlap is higher than or equal to the threshold, the hardware engineer can understand the degree of overlap between the recognition areas in the recognition picture more clearly and quickly, and can therefore adjust the parameters of the camera in a purposeful and targeted manner.
  • the recognition picture further includes identification information of each of the plurality of vehicles and third indication information, the third indication information including the identification information of at least two vehicles whose degree of overlap between recognition areas is higher than or equal to the threshold and the overlap value of the at least two vehicles. In this way, the complexity of the algorithm can be reduced.
  • the parameters of the camera include at least one of the following: the height of the camera, the side angle between the camera's main line of sight and the horizontal line, and the depression angle between the camera's main line of sight and the vertical line.
  • the method further includes: according to the second vehicle picture captured by the camera with adjusted parameters, identifying the license plate information of the vehicle in the second vehicle picture.
  • a device for adjusting a camera in a parking system which is characterized in that it includes various units for performing the method in the above-mentioned first aspect or various implementations thereof.
  • a device for adjusting a camera in a parking system is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory, wherein when the program stored in the memory is executed, the processor is configured to execute the method in the above first aspect or its various implementations.
  • the present application also provides a computer-readable storage medium that stores program code for execution by a device, the program code including instructions for executing the steps of the method in the above-mentioned first aspect or its various implementations.
  • the present application also provides a computer program product, the computer program product including a computer program stored on a computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute the method in the above first aspect or its various implementations.
  • Fig. 1 is a schematic diagram of a parking system according to an embodiment of the present application.
  • Fig. 2 is a schematic flowchart of a method for adjusting a camera in a parking system according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of an identification picture according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of another recognition picture according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of another parking system according to the embodiment of the present application.
  • Fig. 6 is a schematic block diagram of a device for adjusting a camera in a parking system according to an embodiment of the present application.
  • Fig. 7 is a schematic block diagram of a device for adjusting a camera in a parking system according to another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a device for adjusting a camera in a parking system according to an embodiment of the present application.
  • it should be understood that the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • Temporary roadside parking spaces refer to the parking spaces set up on both sides of the road, without fixed exits and entrances, and are generally charged and managed manually.
  • however, this parking management method not only has high labor costs and low efficiency; because roadside temporary parking sections are long, the toll collectors cannot manage all the parking spaces, so parking fees cannot be collected in time.
  • Fig. 1 shows a schematic diagram of a parking system using a camera for parking management.
  • the parking system may include a camera 110 and a parking management module 120
  • the parking management module 120 may include a transmission device 121 , an identification device 122 and a charging management device 123 .
  • the camera 110 can be installed on a street light pole or on a building at the side of the road, and the field of view of one camera can cover a plurality of parking spaces; as shown in Fig. 1, four parking spaces are included in the field of view of the camera 110. If a vehicle is parked within the field of view of the camera 110, the camera 110 can capture a vehicle picture of that vehicle.
  • the camera 110 can send the captured vehicle picture to the transmission device 121 in the parking management module 120 , for example, the camera 110 can send the vehicle picture to the transmission device 121 through a local area network.
  • the transmission device 121 may be a wired transmission device or a wireless transmission device.
  • the recognition device 122 is used for receiving the vehicle picture sent by the transmission device 121, and determining the vehicle information based on the received vehicle picture.
  • the vehicle information may include but not limited to the license plate number, vehicle color, vehicle size and so on.
  • the identification device 122 sends the determined vehicle information to the charging management device 123, and the charging management device 123 determines the parking time of the vehicle based on the vehicle information to generate charging data based on the parking time and charging rules. In this way, when the user leaves the parking space, he can complete the payment by scanning the QR code on the parking space. Alternatively, the charging management device 123 may push the charging data to the user's mobile phone through the parking management APP.
  • the identification device 122 can also determine the occupancy status of the parking spaces within the coverage of the camera 110 based on the received vehicle pictures; for example, three of the four parking spaces in Fig. 1 are occupied and one is free. The identification device 122 can then inform users of the parking-space occupancy status at that location through a parking management application (APP) or other means, thereby enabling area-level parking space guidance.
  • FIG. 1 is only a schematic diagram of a parking system provided by an embodiment of the present application, and the positional relationship among devices, devices, modules, etc. shown in the figure does not constitute any limitation.
  • the accuracy of the parking system's parking management depends on the installation position of the camera. For example, if the side angle between the camera and the parking spaces is too small, the vehicle pictures of two adjacent vehicles captured by the camera will usually overlap heavily, so that the parking management module cannot identify the license plate information of the two adjacent vehicles from the vehicle pictures and therefore cannot charge for parking based on license plate information. If the side angle between the camera and the parking spaces is too large, the parking management module can recognize the license plate information of adjacent vehicles more easily, but the overly large side angle may make license plate recognition harder for vehicles that are closer to the camera.
  • in view of this, the embodiments of the present application propose a method for adjusting the camera in a parking system: after the camera is pre-installed, the vehicle picture captured by the pre-installed camera is processed, so that the hardware engineer can adjust the pre-installed camera in real time based on the processed vehicle picture, thereby achieving the goal of installing the camera in the parking system accurately and effectively.
  • Fig. 2 shows a schematic flowchart of a method 200 for adjusting a camera in a parking system according to an embodiment of the present application.
  • the method 200 may be performed by the device for adjusting the camera, and the device for adjusting the camera may be included in the parking management module 120 in FIG. 1 .
  • the method 200 may include at least part of the following contents.
  • the method 200 can be applied to the on-street parking system mentioned above.
  • the method 200 can be applied to a parking system in a large parking lot such as an underground parking lot.
  • the first vehicle picture is the vehicle picture captured by the camera at the pre-installed position.
  • the camera parameters used for camera pre-installation may be determined through experience, or may be the camera parameters used when the camera was installed last time.
  • the camera parameters used for camera pre-installation may be determined by the device for adjusting the camera in the parking system and sent to the hardware engineer, and then the hardware engineer pre-installs the camera based on the received camera parameters.
  • the camera parameters may also be determined by hardware engineers themselves.
  • the camera parameters may include but not limited to at least one of the following parameters: camera height, side angle, and depression angle.
  • the height of the camera is the distance from the camera to the ground
  • the side angle of the camera is the angle between the main line of sight of the camera and the horizontal line
  • the depression angle of the camera is the angle between the main line of sight of the camera and the vertical line.
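  • For concreteness, the three adjustable parameters can be grouped in a small structure; the sketch below is illustrative only, and the field names, units and example values are assumptions rather than terminology from the publication.

```python
from dataclasses import dataclass


@dataclass
class CameraParameters:
    """Pre-installation parameters that the hardware engineer can adjust."""
    height_m: float               # distance from the camera to the ground
    side_angle_deg: float         # angle between the camera's main line of sight and the horizontal line
    depression_angle_deg: float   # angle between the camera's main line of sight and the vertical line


# e.g. an initial guess taken from experience or from the previous installation
initial_parameters = CameraParameters(height_m=6.0, side_angle_deg=30.0, depression_angle_deg=45.0)
```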
  • the target parking area may include parking spaces within the field of view of the camera. Considering that some drivers do not park according to regulations and may park their vehicles in areas other than parking spaces, the target parking area may also include non-parking areas within the field of view of the camera.
  • the camera may directly send the first vehicle picture to the device for adjusting the camera.
  • the camera can store the first vehicle picture in the cloud, so that the device for adjusting the camera can obtain the first vehicle picture from the cloud.
  • alternatively, acquiring the first vehicle picture captured by the camera may mean directly acquiring individual first vehicle pictures. In this mode, the camera may transmit a vehicle picture to the device for adjusting the camera only once every several frames, which easily leads to the acquired vehicle pictures being discontinuous.
  • acquiring the first vehicle picture captured by the camera may include: acquiring a first video stream captured by the camera, and intercepting the first vehicle picture from the first video stream.
  • in this technical solution, the video stream is acquired first and the vehicle pictures are then extracted from it, as in the sketch below. Since the camera transmits every frame of the video stream to the device for adjusting the camera, the multiple vehicle pictures that the device extracts from the video stream are continuous, that is, the device for adjusting the camera can obtain the vehicle picture of every frame, which is helpful for subsequent operations.
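  • A minimal sketch of this step with OpenCV follows; the stream URL and the way frames are consumed are assumptions for illustration, not details from the publication.

```python
import cv2

STREAM_URL = "rtsp://camera.example/stream"  # hypothetical address of the pre-installed camera


def read_vehicle_pictures(stream_url: str):
    """Yield every frame of the first video stream as a candidate first vehicle picture."""
    cap = cv2.VideoCapture(stream_url)
    if not cap.isOpened():
        raise RuntimeError(f"cannot open video stream: {stream_url}")
    try:
        while True:
            ok, frame = cap.read()  # frame is a BGR image as a numpy array
            if not ok:
                break               # stream ended or dropped
            yield frame
    finally:
        cap.release()


if __name__ == "__main__":
    for index, picture in enumerate(read_vehicle_pictures(STREAM_URL)):
        # every frame is available, so consecutive vehicle pictures stay continuous
        print("frame", index, picture.shape)
```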
  • the device for adjusting the camera may also use a tracking algorithm to track each vehicle in the continuous first video stream. For example, use a tracking algorithm to determine which vehicle in the current frame corresponds to which vehicle in the previous frame. In this way, the problem of misidentifying vehicles in different frames can be avoided.
  • the tracking algorithm may be any one of the following algorithms: Kalman filter algorithm, optical flow method, particle filter algorithm or mean shift (Mean Shift) algorithm, etc.
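  • The publication leaves the choice of tracking algorithm open (Kalman filtering, optical flow, particle filtering, mean shift). The sketch below is not one of those algorithms; it is a deliberately simple nearest-centroid association, shown only to illustrate how a vehicle can keep the same ID from one frame to the next.

```python
import itertools
import math

# Box format assumed here: (x1, y1, x2, y2) in pixels.
_next_id = itertools.count(1)


def _center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


def associate(previous_tracks: dict, detections: list, max_dist: float = 80.0) -> dict:
    """Greedily match current-frame detections to previous-frame tracks by centroid distance.

    `previous_tracks` maps vehicle ID -> box from the previous frame; the return value maps
    vehicle ID -> box for the current frame. Unmatched detections receive new IDs, so the
    same physical vehicle keeps the same ID across frames.
    """
    updated, unmatched = {}, list(detections)
    for vehicle_id, old_box in previous_tracks.items():
        if not unmatched:
            break
        old_center = _center(old_box)
        best = min(unmatched, key=lambda b: math.dist(old_center, _center(b)))
        if math.dist(old_center, _center(best)) <= max_dist:
            updated[vehicle_id] = best
            unmatched.remove(best)
    for box in unmatched:            # vehicles seen for the first time
        updated[next(_next_id)] = box
    return updated
```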
  • according to the first vehicle picture and the vehicle area recognition model, a recognition picture in which the plurality of vehicles have been recognized is obtained; the recognition picture identifies the recognition area of each of the plurality of vehicles and includes an identification of each vehicle's recognition area, the identification being used to indicate the degree of overlap between the recognition areas of the multiple vehicles.
  • the vehicle area recognition model may be obtained from the vehicle picture of the sample vehicle and the recognition area.
  • the vehicle area identification model can obtain the identification area of the vehicle, and no complex calculation model is required.
  • the vehicle area recognition model can be, but is not limited to, a logistic regression (LR), a gradient boosting decision tree (GBDT), a support vector machine (SVM), a neural network, and so on.
  • the neural network here can be a heavyweight neural network, such as a deep neural network (deep neural networks, DNN).
  • the neural network here can also be a lightweight neural network, such as SqueezeNet, ShuffleNet, MobileNet, and Xception.
  • compared with lightweight neural networks, heavyweight neural networks have higher computing power: under the same camera parameters, a heavyweight neural network may be able to recognize the license plate information of a vehicle in the vehicle picture captured by the camera, while a lightweight neural network may not. Considering that the computing power of the actual product in production cannot be determined in advance, a lightweight neural network is used in the process of adjusting the camera, so that regardless of whether the actual product uses a heavyweight or a lightweight neural network, the device for adjusting the camera can accurately identify the recognition area of each vehicle.
  • further, since the processing speed of a lightweight neural network is relatively fast, setting the vehicle area recognition model to a lightweight neural network model enables the hardware engineer to adjust the camera to the optimal position within a short time (see the sketch below).
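  • As one possible stand-in for such a lightweight model (this is an assumption for illustration, not the model used in the publication), a pretrained SSDLite320 MobileNetV3 detector from torchvision (version 0.13 or later assumed) can return per-vehicle bounding boxes:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# COCO class indices for vehicle-like objects: 3 = car, 6 = bus, 8 = truck.
VEHICLE_CLASSES = {3, 6, 8}

# A lightweight SSDLite + MobileNetV3 detector pretrained on COCO.
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()


def vehicle_recognition_areas(image, score_threshold: float = 0.5):
    """Return bounding boxes (x1, y1, x2, y2) of vehicles in an RGB image (PIL or numpy HWC)."""
    with torch.no_grad():
        prediction = model([to_tensor(image)])[0]
    boxes = []
    for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
        if int(label) in VEHICLE_CLASSES and float(score) >= score_threshold:
            boxes.append(tuple(float(v) for v in box))
    return boxes
```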
  • the first vehicle picture and the identification picture can be input into the vehicle area identification model as sample data, so as to update the vehicle area identification model.
  • the sample data used to train the vehicle area recognition model will increase, and then the trained vehicle area recognition model will be more accurate, thereby greatly reducing the number and time of adjusting the camera using the vehicle area recognition model.
  • both the first vehicle picture and the recognition picture may include multiple pictures, for example, each frame has a first vehicle picture and a recognition picture.
  • Fig. 3 is a schematic diagram of a frame recognition picture.
  • the recognition area of a vehicle may be the area formed by the vehicle's length and width, that is, a rectangle whose sides are the vehicle's length and width.
  • the recognition area of the vehicle may also be the outline of the vehicle, and in this case, the recognition area of the vehicle is an irregular geometric figure.
  • the degree of overlap of the recognition areas can be used to represent the degree of overlap of the vehicles, and the degree of overlap between the recognition areas of multiple vehicles can be taken pairwise, that is, as the degree to which two vehicles overlap each other.
  • for example, if the recognition area of vehicle 1 is R1, the recognition area of vehicle 2 is R2, and the degree of overlap between R1 and R2 is C, then the degree of overlap between vehicle 1 and vehicle 2 is also C.
  • the identification may include first indication information and/or second indication information, where the first indication information is used to indicate recognition areas, among the recognition areas of the multiple vehicles, whose degree of overlap is higher than or equal to a threshold, and the second indication information is used to indicate recognition areas whose degree of overlap is lower than the threshold.
  • the color indicated by the first indication information may be different from the color indicated by the second indication information.
  • for example, assume the recognition area of a vehicle is a rectangle: if two vehicles are too close together and their degree of overlap is high, their recognition areas can be drawn as red rectangles; if the two vehicles are relatively far apart and their recognition areas do not overlap, their recognition areas can be drawn as green rectangles.
  • this implementation sets the color indicated by the first indication information to be different from the color indicated by the second indication information, which reflects, in a more intuitive way, the difference between recognition areas with a higher degree of overlap and those with a lower degree of overlap.
  • as another example, the first indication information may be a dashed line and the second indication information may be a solid line: still assuming that the recognition area of a vehicle is a rectangle, if the degree of overlap between two vehicles is high, their recognition areas can be drawn as dashed boxes, and if there is no overlap between the two vehicles, their recognition areas can be drawn as solid boxes.
  • in the above technical solution, the identification for recognition areas whose degree of overlap is higher than or equal to the threshold is set to be different from the identification for recognition areas whose degree of overlap is lower than the threshold, so that after receiving the recognition picture the hardware engineer can tell at a glance which recognition areas overlap more and which overlap less; this avoids the hardware engineer mistaking a highly overlapping recognition area for a slightly overlapping one, or vice versa, when adjusting the camera, and thus improves the efficiency of adjusting the camera.
  • to let the hardware engineer know more clearly and quickly the specific degree of overlap between recognition areas whose overlap is higher than or equal to the threshold, the recognition picture can further identify the overlap value between at least two such recognition areas.
  • the identification picture may include third indication information, where the third indication information is used to indicate an overlap value between at least two identification regions whose overlap degree is higher than or equal to a threshold.
  • optionally, the third indication information may be an intersection over union (IOU). IOU is a measure of the overlap between different objects and can be expressed as the following formula (see also the sketch below): IOU(A, B) = Area(A ∩ B) / Area(A ∪ B)
  • where A and B are the recognition areas of two vehicles; from the formula, the IOU is the ratio of the intersection area of A and B to the union area of A and B.
  • the larger the IOU, the higher the degree of overlap between A and B.
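  • The formula translates directly into code; the following minimal sketch assumes axis-aligned rectangular recognition areas given as (x1, y1, x2, y2) corner coordinates, a convention adopted here for illustration.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# two partially overlapping recognition areas
print(round(iou((0, 0, 100, 100), (50, 0, 150, 100)), 2))  # 0.33
```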
  • for example, the lower right corner of Fig. 4 shows that the IOU between the recognition areas of vehicle 3 and vehicle 6 is 0.25, and the IOU between the recognition areas of vehicle 6 and vehicle 7 is 0.23; it follows that the degree of overlap between vehicle 3 and vehicle 6 is higher than that between vehicle 6 and vehicle 7.
  • because the recognition picture identifies the overlap value between at least two recognition areas whose degree of overlap is higher than or equal to the threshold, the hardware engineer can understand more clearly and quickly how much the recognition areas in the recognition picture overlap, and can therefore adjust the parameters of the camera in a purposeful and targeted manner.
  • the recognition picture can also identify the overlap degree value between at least two recognition regions whose overlap degree is lower than the threshold value.
  • the identification picture may further include identification information of each of the multiple vehicles.
  • the identification information may be an ID.
  • Fig. 3 and Fig. 4 include 7 vehicles whose IDs are 1, 2...7, respectively.
  • the ID may be a unique ID assigned to each vehicle by the device for adjusting the camera, and for multiple frames of different identification pictures, the ID of the same vehicle is the same.
  • the identification picture of the nth frame includes vehicle A, vehicle B, vehicle C and vehicle D, and the IDs of these four vehicles are 1, 2, 3 and 4 respectively.
  • the (n+2)th frame identification picture includes vehicle B, vehicle C and vehicle E, the ID of vehicle B is still 2, and the ID of vehicle C is still 3.
  • the third indication information may include identification information of at least two vehicles whose overlapping degree between identification areas is higher than or equal to a threshold value and overlapping degree values of the at least two vehicles. In this way, the complexity of the algorithm can be reduced.
  • for example, as shown in Fig. 4, the identification information of the two vehicles at the top of the figure is 6 and 7 respectively, and the overlap value of their recognition areas is 0.23; accordingly, as seen in the lower right corner of Fig. 4, the third indication information includes the identification information 6 and 7 as well as the IOU value between the two vehicles whose identification information is 6 and 7, namely 6-7 IOU: 0.23.
  • similarly, the overlap value of the recognition areas of the vehicles whose identification information is 3 and 6 is 0.25; the third indication information therefore includes the identification information 3 and 6 as well as the IOU value between the two vehicles whose identification information is 3 and 6, namely 3-6 IOU: 0.25.
  • the identification picture may include identification information of the identification area of each vehicle. Similar to the identification information of the vehicle, the identification information of the identification area may be an ID, and the ID of the identification area of each vehicle is unique.
  • the third indication information includes identification information of at least two identification areas whose overlapping degree is higher than or equal to a threshold and an overlapping degree value between the at least two identification areas.
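  • Putting the pieces above together, a minimal OpenCV sketch of building such a recognition picture from per-vehicle boxes and IDs might look as follows; the box format, the threshold value and the colour choices (green for low overlap, red for high overlap, as in the colour example above) are illustrative assumptions.

```python
import itertools
import cv2

OVERLAP_THRESHOLD = 0.1  # assumed threshold; the publication does not fix a value
GREEN, RED = (0, 255, 0), (0, 0, 255)  # BGR colours for the second/first indication information


def iou(a, b):
    # same IoU helper as in the earlier sketch
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def build_recognition_picture(frame, tracks):
    """Annotate `frame` with each vehicle's recognition area, ID and pairwise IOU labels.

    `tracks` maps vehicle ID -> box (x1, y1, x2, y2), as produced by the tracking step.
    """
    picture = frame.copy()
    overlapping, labels = set(), []
    for (id_a, box_a), (id_b, box_b) in itertools.combinations(tracks.items(), 2):
        value = iou(box_a, box_b)
        if value >= OVERLAP_THRESHOLD:
            overlapping.update((id_a, id_b))
            labels.append(f"{id_a}-{id_b} IOU: {value:.2f}")  # third indication information
    for vehicle_id, (x1, y1, x2, y2) in tracks.items():
        colour = RED if vehicle_id in overlapping else GREEN
        cv2.rectangle(picture, (int(x1), int(y1)), (int(x2), int(y2)), colour, 2)
        cv2.putText(picture, str(vehicle_id), (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)
    for i, text in enumerate(labels):  # list the IOU values in the lower right corner
        cv2.putText(picture, text, (picture.shape[1] - 220, picture.shape[0] - 15 - 25 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, RED, 2)
    return picture
```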
  • specifically, as shown in Fig. 5: at 1241, the device 124 for adjusting the camera sends the recognition picture to the hardware engineer; at 1301, the hardware engineer 130 receives the recognition picture and adjusts the parameters of the camera based on the degree of overlap between the recognition areas of the multiple vehicles in the recognition picture.
  • sending the identification picture may specifically include: packaging multiple identification pictures into a second video stream, and sending the second video stream.
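  • One way to package multiple recognition pictures into a second video stream (an assumption for illustration; the publication does not prescribe an encoder) is OpenCV's VideoWriter, after which the resulting file or stream is sent to the hardware engineer.

```python
import cv2


def package_recognition_pictures(pictures, path="recognition_stream.mp4", fps=25.0):
    """Encode a sequence of recognition pictures (BGR frames of equal size) as a video file."""
    height, width = pictures[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for picture in pictures:
        writer.write(picture)
    writer.release()
    return path  # the packaged second video stream can then be sent to the hardware engineer
```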
  • the camera can capture the second vehicle picture with the adjusted parameters, and the device for adjusting the camera can acquire the second vehicle picture and identify the vehicle information of the vehicle in the second vehicle picture.
  • the vehicle information may include license plate, vehicle color, vehicle size and so on.
  • if the device for adjusting the camera can recognize all the vehicle information of the vehicles in the second vehicle picture, this indicates that the camera has been adjusted to the optimal position, which becomes the final installation position of the camera. In addition, the device for adjusting the camera can determine the parking time of each vehicle based on the identified vehicle information and charge the parking fee for the vehicles in the second vehicle picture.
  • otherwise, if the vehicle information of the vehicles in the second vehicle picture still cannot be recognized, the device for adjusting the camera continues to obtain a recognition picture based on the second vehicle picture and the vehicle area recognition model, the recognition picture identifying the recognition area of each of the multiple vehicles in the second vehicle picture; the recognition picture is then sent, so that the hardware engineer can adjust the parameters of the camera again based on each vehicle's recognition area.
  • the camera, the device for adjusting the camera, and the hardware engineer cycle through the above operations in sequence until the device for adjusting the camera can accurately identify the vehicle information of all the vehicles in the vehicle picture.
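  • The adjust-and-verify cycle just described can be summarised in a short sketch. All four callables passed in are hypothetical placeholders for the camera, the vehicle-information recogniser, the recognition-picture builder from the earlier sketch, and the channel to the hardware engineer.

```python
def adjust_camera_loop(capture_picture, recognize_vehicle_info,
                       build_recognition_picture, send_to_engineer, max_rounds=10):
    """Iterate capture -> recognise -> annotate -> send until every vehicle's info is readable."""
    for _ in range(max_rounds):
        picture = capture_picture()                       # vehicle picture at the current parameters
        vehicle_info = recognize_vehicle_info(picture)    # license plate, colour, size, ...
        if vehicle_info and all(v.get("license_plate") for v in vehicle_info):
            return True                                   # camera is at its final installation position
        send_to_engineer(build_recognition_picture(picture))
        # ... the hardware engineer adjusts height / side angle / depression angle by hand ...
    return False
```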
  • in the embodiments of the present application, based on the vehicle picture captured with the camera's initial parameters and the vehicle area recognition model, a recognition picture that identifies the recognition area of each vehicle is obtained; the recognition picture includes an identification used to indicate the degree of overlap between the recognition areas of the vehicles, and the recognition picture is sent.
  • in this way, after receiving the recognition picture, the hardware engineer can adjust the camera parameters in real time and purposefully based on the degree of overlap between the recognition areas of the vehicles, which not only greatly reduces the time needed to adjust the camera but also allows the camera to be installed accurately and effectively, thereby helping to improve the accuracy of identifying vehicle information.
  • further, since recognizing a vehicle's recognition area is much simpler than recognizing specific license plate information, using the degree of overlap between the recognition areas of the vehicles to adjust the camera requires no complex computation, which improves processing speed and makes the method applicable to more parking systems.
  • Fig. 6 shows a schematic block diagram of an apparatus 600 for adjusting a camera in a parking system according to an embodiment of the present application.
  • the device 600 can execute the method for adjusting the camera in the parking system according to the embodiment of the present application.
  • the device 600 can be the device for adjusting the camera in the method 200 described above, such as 124 in FIG. 5 .
  • the device 600 may include:
  • An acquisition unit 610 configured to acquire a first vehicle picture captured by the camera, where the first vehicle picture includes pictures of multiple vehicles on the target parking area;
  • the processing unit 620 is configured to obtain, according to the first vehicle picture and the vehicle area recognition model, a recognition picture in which the plurality of vehicles have been recognized, where the recognition picture identifies the recognition area of each of the plurality of vehicles and includes an identification of each vehicle's recognition area, the identification being used to indicate the degree of overlap between the recognition areas of the plurality of vehicles;
  • the communication unit 630 is configured to send the recognition picture, wherein the degree of overlap between the recognition areas of the plurality of vehicles is used to adjust the parameters of the camera.
  • the identification includes first indication information and/or second indication information
  • the first indication information is used to indicate recognition areas, among the recognition areas of the plurality of vehicles, whose degree of overlap is higher than or equal to a threshold
  • the second indication information is used to indicate identification areas where the degree of overlap between the identification areas of the plurality of vehicles is lower than the threshold.
  • the color indicated by the first indication information is different from the color indicated by the second indication information.
  • the identification picture further identifies an overlap value between at least two vehicles whose overlap between identification areas of the plurality of vehicles is higher than or equal to a threshold.
  • the identification picture further includes identification information and third indication information of each of the plurality of vehicles, and the third indication information includes the degree of overlap between identification areas The identification information of at least two vehicles higher than or equal to the threshold and the overlap value of the at least two vehicles.
  • optionally, in an embodiment of the present application, the parameters of the camera include at least one of the following: the height of the camera, the side angle between the camera's main line of sight and the horizontal line, and the depression angle between the camera's main line of sight and the vertical line.
  • the acquiring unit 610 may also be configured to: acquire a second vehicle picture captured by the camera with adjusted parameters;
  • the apparatus 600 may further include: an identification unit 640 configured to identify vehicle information of the vehicle in the second vehicle picture.
  • Fig. 8 is a schematic diagram of the hardware structure of the device for adjusting the camera in the parking system according to the embodiment of the present application.
  • the device 800 for adjusting the camera in the parking system shown in FIG. 8 includes a memory 801 , a processor 802 , a communication interface 803 and a bus 804 .
  • the memory 801 , the processor 802 , and the communication interface 803 are connected to each other through a bus 804 .
  • the memory 801 may be a read-only memory (read-only memory, ROM), a static storage device and a random access memory (random access memory, RAM).
  • the memory 801 can store a program. When the program stored in the memory 801 is executed by the processor 802, the processor 802 and the communication interface 803 are used to execute each step of the method for adjusting the camera in the parking system according to the embodiment of the present application.
  • the processor 802 may be a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is used to execute related programs so as to realize the functions required of the units in the device for adjusting the camera in the parking system according to the embodiments of the present application, or to execute the method for adjusting the camera in the parking system according to the embodiments of the present application.
  • the processor 802 may also be an integrated circuit chip with signal processing capability.
  • each step of the method for adjusting the camera in the parking system of the embodiment of the present application may be completed by an integrated logic circuit of hardware in the processor 802 or instructions in the form of software.
  • the above processor 802 may also be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • Various methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the software module may be located in a storage medium well-established in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 801; the processor 802 reads the information in the memory 801 and, in combination with its hardware, completes the functions required of the units included in the device for adjusting the camera in the parking system of the embodiments of the present application, or executes the method for adjusting the camera in the parking system of the embodiments of the present application.
  • the communication interface 803 implements communication between the apparatus 800 and other devices or communication networks by using a transceiver device such as but not limited to a transceiver.
  • the first vehicle picture captured by the camera can be acquired through the communication interface 803 .
  • the bus 804 may include pathways for transferring information between various components of the device 800 (eg, memory 801 , processor 802 , communication interface 803 ).
  • the device 800 may also include other devices necessary for normal operation.
  • the apparatus 800 may also include hardware devices for implementing other additional functions.
  • the device 800 may only include components necessary to implement the embodiment of the present application, and does not necessarily include all the components shown in FIG. 8 .
  • the embodiment of the present application also provides a computer-readable storage medium, which stores program codes for execution by the device, and the program codes include instructions for executing the steps in the above-mentioned method for adjusting the camera in the parking system.
  • the embodiment of the present application also provides a computer program product, the computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by the computer, the The computer executes the above method for adjusting the camera in the parking system.
  • the above-mentioned computer-readable storage medium may be a transitory computer-readable storage medium, or a non-transitory computer-readable storage medium.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • for example, the division of units is only a logical functional division; in actual implementation there may be other ways of dividing them, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the aspects, implementations, implementations or features of the described embodiments can be used alone or in any combination. Aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software.
  • the described embodiments may also be embodied by a computer-readable medium storing computer-readable code comprising instructions executable by at least one computing device.
  • the computer readable medium can be associated with any data storage device that can store data that can be read by a computer system.
  • Exemplary computer readable media may include read-only memory, random access memory, compact disc read-only memory (CD-ROM), hard disk drives (HDD), digital video discs (DVD), magnetic tape, optical data storage devices, etc.
  • the computer readable medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.

Abstract

一种停车系统中调整摄像头(110)的方法和装置(124),方法包括:获取摄像头(110)捕获的第一车辆图片,第一车辆图片包括目标停车区域上的多个车辆的图片(210);根据第一车辆图片以及车辆区域识别模型,得到对多个车辆进行识别后的识别图片,识别图片标识出多个车辆中每一个车辆的识别区域,且识别图片包括每一个车辆的识别区域的标识,标识用于指示多个车辆的识别区域之间的重叠度(220);发送识别图片,其中,多个车辆的识别区域之间的重叠度用于调整摄像(110)头的参数(230)。该方法和装置(124)能够准确、有效地对停车系统中的摄像头(110)进行安装。

Description

停车系统中调整摄像头的方法和装置 技术领域
本申请涉及人工智能领域,并且更为具体地,涉及一种停车系统中调整摄像头的方法和装置。
背景技术
随着汽车数量的日益增多,城市停车位的需求量也越来越大。但停车场的建设速度远远落后于汽车增长速度,造成了停车位的严重短缺。为了弥补停车位的不足,许多城市在车流量不大的支线道路边设置了停车位。对这些停车位的收费管理通常都是通过人工计时收费的方式进行的,这种收费方法不仅人力成本高,而且由于路边临时停车路段较长,收费人员无法对所有车位进行管理,从而出现停车费用无法及时收取的问题。
目前一些新的路侧停车场，使用电子设备来监控用户的停车行为，如使用摄像头来实现车辆的停车管理。在这种方案中，摄像头的安装位置极其重要，若摄像头安装不准确，可能会出现无法识别停车位上的部分车辆的问题，从而无法准确对该部分车辆进行收费。
因此,如何准确、有效地安装停车系统中的摄像头,成为一个亟待解决的问题。
发明内容
本申请提供了一种停车系统中调整摄像头的方法和装置,能够准确、有效地对停车系统中的摄像头进行安装。
第一方面,提供了一种停车系统中调整摄像头的方法,该方法包括:获取摄像头捕获的第一车辆图片,所述第一车辆图片包括目标停车区域上的多个车辆的图片;根据所述第一车辆图片以及车辆区域识别模型,得到对所述多个车辆进行识别后的识别图片,所述识别图片标识出所述多个车辆中每一个车辆的识别区域,且所述识别图片包括所述每一个车辆的识别区域的标识, 所述标识用于指示所述多个车辆的识别区域之间的重叠度;发送所述识别图片,其中,所述多个车辆的识别区域之间的重叠度用于调整所述摄像头的参数。
本申请实施例,基于以摄像头的初始参数捕获的车辆图片和车辆区域识别模型得到标识出每一个车辆的识别区域的识别图片,且该识别图片包括用于指示车辆的识别区域之间的重叠度的标识,并发送该识别图片。这样,硬件工程师接收到该识别图片后,能够基于每一个车辆的识别区域的重叠度实时地以及有目的地调整摄像头参数,不仅大大减小了调整摄像头的时间,还可以准确、有效地对摄像头进行安装,如此,有利于提高识别车辆信息的准确率。
进一步地,由于车辆的识别区域相对于具体的车牌信息的识别要简单的多,在本申请实施例中,利用车辆的识别区域的重叠度调整摄像头,不需要复杂的运算,既可以提高处理速度,又能适用于较多的停车系统。
在一些可能的实现方式中,所述标识包括第一指示信息和/或第二指示信息,所述第一指示信息用于指示所述多个车辆的识别区域之间的重叠度高于或等于阈值的识别区域,所述第二指示信息用于指示所述多个车辆的识别区域之间的重叠度低于所述阈值的识别区域。
上述技术方案,将识别区域的重叠度高于或等于阈值的标识和重叠度低于阈值的标识设置为不同,使得硬件工程师在接收到识别图片后,对于重叠度较高的识别区域和重叠度较低的识别区域,一目了然,能够避免硬件工程师在调整摄像头时将重叠度较高的识别区域错认为重叠度低,或者将重叠度较低的识别区域错认为重叠度高的问题,从而可以提高调整摄像头的效率。
在一些可能的实现方式中,所述第一指示信息指示的颜色和所述第二指示信息指示的颜色不同。
该实现方式将第一指示信息指示的颜色和第二指示信息指示的颜色设置为不同,能够以一种更直观的方式体现出重叠度较高以及重叠度较低的识别区域的不同。
在一些可能的实现方式中,所述识别图片还标识出所述多个车辆的识别区域之间重叠度高于或等于阈值的至少两个识别区域之间的重叠度值。
上述技术方案,识别图片标识出重叠度高于或等于阈值的至少两个识别 区域之间的重叠度值,使得硬件工程师可以更清楚、快速地了解识别图片中识别区域之间的重叠程度,从而可以有目的、有针对性地调整摄像头的参数。
在一些可能的实现方式中,所述识别图片还包括所述多个车辆中每一个车辆的标识信息和第三指示信息,所述第三指示信息包括识别区域之间的重叠度高于或等于所述阈值的至少两个车辆的标识信息以及所述至少两个车辆的重叠度值。如此,能够降低算法的复杂性。
在一些可能的实现方式中,所述摄像头的参数包括以下参数中的至少一种:所述摄像头的高度、所述摄像头的主视线和水平线之间的侧角和所述摄像头的主视线和垂直线之间的俯角。
在一些可能的实现方式中,所述方法还包括:根据所述摄像头以调整后的参数捕获的第二车辆图片,识别所述第二车辆图片中的车辆的车牌信息。
第二方面,提供了一种停车系统中调整摄像头的装置,其特征在于,包括用于执行上述第一方面或其各实现方式中的方法的各单元。
第三方面,提供了一种停车系统中调整摄像头的装置,包括:存储器,用于存储程序;处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行上述第一方面或其各实现方式中的方法。
第四方面,本申请还提供了一种计算机可读存储介质,存储用于设备执行的程序代码,所述程序代码包括用于执行上述第一方面或其各实现方式中的方法中的步骤的指令。
第五方面,本申请还提供了一种计算机程序产品,所述计算机程序产品包括存储在计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,使所述计算机执行上述第一方面或其各实现方式中的方法。
附图说明
图1是本申请实施例的一种停车系统的示意性图。
图2是本申请实施例的停车系统中调整摄像头的方法的示意性流程图。
图3是本申请实施例的一种识别图片的示意性图。
图4是本申请实施的另一种识别图片的示意性图。
图5是本申请实施例的另一种停车系统的示意性图。
图6是本申请一个实施例的停车系统中调整摄像头的装置的示意性框图。
图7是本申请另一个实施例的停车系统中调整摄像头的装置的示意性框图。
图8是本申请实施例的停车系统中调整摄像头的装置的结构示意图。
附图标记列表:
110,摄像头;
120,停车管理模块;
121,传输装置;
122,识别装置;
123,收费管理装置;
210,获取摄像头捕获的第一车辆图片,第一车辆图片包括目标停车区域上的多个车辆的图片;
220,根据第一车辆图片以及车辆区域识别模型,得到对多个车辆进行识别后的识别图片,识别图片标识出多个车辆中每一个车辆的识别区域,且识别图片包括每一个车辆的识别区域的标识,该标识用于指示多个车辆的识别区域之间的重叠度;
230,发送识别图片,其中,多个车辆的识别区域之间的重叠度用于调整摄像头的参数;
124,停车系统中调整摄像头的装置;
1241,发送识别图片;
130,硬件工程师;
1301,基于识别图片中多个车辆的识别区域之间的重叠度,调整摄像头的参数;
600,停车系统中调整摄像头的装置;
610,获取单元;
620,处理单元;
630,通信单元;
640,识别单元;
800,停车系统中调整摄像头的装置;
801,存储器;
802,处理器;
803,通信接口;
804,总线。
具体实施方式
下面结合附图,对本申请实施例中的技术方案进行描述。应理解,本说明书中的具体的例子只是为了帮助本领域技术人员更好地理解本申请实施例,而非限制本申请实施例的范围。
应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
还应理解,本说明书中描述的各种实施方式,既可以单独实施,也可以组合实施,本申请实施例对此不作限定。
除非另有说明,本申请实施例所使用的所有技术和科学术语与本申请的技术领域的技术人员通常理解的含义相同。本申请中所使用的术语只是为了描述具体的实施例的目的,不是旨在限制本申请的范围。
路边临时停车位是指在道路两边设置的停车位,没有固定的出口和入口,一般都是通过人工来收费和管理的。但这种停车管理方式不仅人力成本高、效率低,而且由于路边临时停车路段较长,收费人员无法对所有车位进行管理,从而出现停车费用无法及时收取的问题。
为了解决该问题,出现了一些利用地磁检测器、超声波检测器、停车计时桩或摄像头等电子设备来进行车辆的停车管理的技术,这些技术不再依赖人工来进行停车收费,解决了传统的停车管理方式的高成本、监管难、效率低以及容易漏收费等问题,正在成为未来停车管理的趋势。本申请实施例应用于利用摄像头来实现车辆的停车管理。
图1示出了一种利用摄像头进行停车管理的停车系统的示意性图。其中,停车系统可以包括摄像头110和停车管理模块120,停车管理模块120可以包括传输装置121、识别装置122和收费管理装置123。
可选地,摄像头110可以安装在路边的街灯杆或建筑物上,其中,一个摄像头的视场内可以包括多个停车位,如图1所示,摄像头110的视场内包括4个停车位。若有车辆停在该摄像头110的视场范围内,则摄像头110可以捕获该车辆的车辆图片。
之后,摄像头110可以将捕获的车辆图片发送到停车管理模块120中的传输装置121,例如,摄像头110可以通过局域网络将车辆图片发送到传输装置121。示例性地,传输装置121可以为有线传输装置或者无线传输装置。
识别装置122用于接收传输装置121发送的车辆图片,并基于接收到的车辆图片确定车辆信息。其中,车辆信息可以包括但不限于车牌号、车辆颜色、车辆大小等。
接下来,识别装置122将确定的车辆信息发送给收费管理装置123,收费管理装置123基于车辆信息确定车辆的停车时间,以基于停车时间和计费规则生成计费数据。这样,用户在离开停车位时,可以通过扫描车位上的二维码完成缴费。或者,收费管理装置123可以通过停车管理APP将计费数据推送到用户的手机上。
进一步地，识别装置122还可以基于接收到的车辆图片确定摄像头110覆盖范围内的车位占用状态，如图1中的四个停车位已使用三个，空闲一个。并且识别装置122可以通过停车管理应用程序（application，APP）或者其他方式向用户告知该位置的车位占用状态，从而可以实现区域车位引导的目的。
应理解,图1仅是本申请实施例提供的一种停车系统的示意性图,图中所示的设备、器件、模块等之间的位置关系不构成任何限制。
停车系统进行停车管理的准确性依赖于摄像头的安装位置。比如,如果摄像头与停车位的侧角过小,通常会导致摄像头拍摄的相邻两个车辆的车辆图片出现高度重叠的问题,以至于停车管理模块无法基于车辆图片识别出相邻两个车辆的车牌信息,也就无法基于车牌信息进行停车收费。若摄像头与停车位的侧角过大,虽然停车管理模块更容易识别相邻车辆的车牌信息,但过大的侧角可能会给距离摄像头较近的车辆的车牌识别带来额外的困难。
因此,如何准确、有效地安装停车系统中的摄像头,成为一个亟待解决的问题。
鉴于此,本申请实施例提出了一种停车系统中调整摄像头的方法,在摄 像头进行预安装后,根据预安装的摄像头捕获的车辆图片对该车辆图片进行处理,以使硬件工程师基于处理后的车辆图片对预安装的摄像头实时进行调整,从而能够实现准确、有效地安装停车系统中摄像头目的。
图2示出了本申请实施例的停车系统中调整摄像头的方法200的示意性流程图。方法200可以由调整摄像头的装置执行,调整摄像头的装置可以包括于图1中的停车管理模块120中。如图2所示,方法200可以包括以下内容中的至少部分内容。
可选地,方法200可以应用于上文提到的路侧停车系统中。
可选地,方法200可以应用于大型停车场如地下停车场中的停车系统中。
210,获取摄像头捕获的第一车辆图片,该第一车辆图片包括目标停车区域上的多个车辆的图片。
其中,第一车辆图片是摄像头在预安装位置捕获的车辆图片。可选地,用于摄像头预安装时所采用的摄像头参数可以是通过经验确定的,或者,也可以是上一次安装摄像头时所采用的摄像头参数。
示例性地,用于摄像头预安装时所采用的摄像头参数可以是停车系统中调整摄像头的装置确定并发送给硬件工程师的,之后,硬件工程师基于接收到的摄像头参数对摄像头进行预安装。或者,摄像头参数也可以是硬件工程师自行确定的。
摄像头参数可以包括但不限于以下参数中的至少一项:摄像头高度、侧角以及俯角。其中,摄像头的高度为摄像头距离地面的距离,摄像头的侧角为摄像头的主视线与水平线之间的角度,摄像头的俯角为摄像头的主视线和垂直线之间的角度。
可选地,目标停车区域可以包括摄像头视场内的停车位。考虑到有些司机不按规定停车,可能将车辆停放在停车位之外的区域,因此,目标停车区域还可以包括摄像头视场内的非停车位的区域。
可选地,摄像头可以直接向调整摄像头的装置发送第一车辆图片。
或者,摄像头可以将第一车辆图片存储在云端,从而调整摄像头的装置可以从云端获取到第一车辆图片。
可选地,上文中的获取摄像头捕获的第一车辆图片,可以是直接获取的第一车辆图片。在该方式中,摄像头每隔多帧才可向调整摄像头的装置传输 一张车辆图片,容易产生获取的车辆图片不连续的问题。
或者,获取摄像头捕获的第一车辆图片,可以包括:获取摄像头捕获的第一视频流,并从该第一视频流中截取第一车辆图片。
上述技术方案,先获取视频流,再从视频流中截取车辆图片。由于摄像头会向调整摄像头的装置传输每一帧视频流,这样,调整摄像头的装置从视频流中截取的多张车辆图片之间是连续的,即调整摄像头的装置可以获取到每一帧的车辆图片,有利于后续操作。
可选地,在本申请实施例中,调整摄像头的装置还可以利用追踪算法在连续的第一视频流中对每一个车辆进行追踪。比如,利用追踪算法确定当前帧中的哪个车辆对应上一帧中的哪个车辆。如此,可以避免在不同帧中将车辆错误识别的问题。
其中,追踪算法可以是以下算法中的任意一种:卡尔曼滤波器算法、光流法、粒子滤波算法或均值漂移(Mean Shift)算法等。
220,根据第一车辆图片以及车辆区域识别模型,得到对多个车辆进行识别后的识别图片,该识别图片标识出多个车辆中每一个车辆的识别区域,以及包括每一个车辆的识别区域的标识,该标识用于指示多个车辆的识别区域之间的重叠度。
其中,车辆区域识别模型可以是由样本车辆的车辆图片和识别区域得到的。相应地,车辆区域识别模型能够得到车辆的识别区域即可,不需要复杂的计算模型。
车辆识别模型可以为但不限于逻辑回归(logistic regression,LR)、梯度提升决策树(gradient boosting decision tree,GBDT)、支持向量机(support vector machine,SVM)、神经网络等。
这里的神经网络可以是重量级神经网络,如深度神经网络(deep neural networks,DNN)。或者,这里的神经网络也可以是轻量级神经网络,如SqueezeNet、ShuffleNet、MobileNet和Xception等。
由于相对于轻量级神经网络,重量级神经网络的运算能力较高。在相同的摄像头参数下,可能重量级神经网络可以将摄像头捕获的车辆图片中车辆的车牌信息识别出来,然而轻量级神经网络却可能无法正确识别车牌信息。考虑到无法确定生产环节中实际产品是何种运算能力,因此,在调整摄像头 的过程中使用轻量级神经网络,这样不论实际产品使用重量级神经网络和轻量级神经网络中的任一种,调整摄像头的装置都能够准确地识别出车辆的识别区域。
进一步地,由于轻量级神经网络的处理速度较快,因此,将车辆区域识别模型设置为轻量级神经网络模型,使得硬件工程师能够在较短的时间内将摄像头调整到最佳位置上。
可选地,第一车辆图片和识别图片可以作为样本数据输入到车辆区域识别模型中,以更新车辆区域识别模型。如此,用于训练车辆区域识别模型的样本数据就会增多,继而训练得到的车辆区域识别模型也就越准确,从而大大减小了之后利用该车辆区域识别模型调整摄像头的调整次数和时间。
需要说明的是,第一车辆图片和识别图片都可以包括多张图片,比如,每一帧都有一张第一车辆图片和一张识别图片。
图3为一帧识别图片的示意性图,可选地,如图3所示,车辆的识别区域可以是由车辆的长和宽构成的区域,即车辆的识别区域为以车辆的长和宽构成的矩形。或者,车辆的识别区域也可以是车辆的轮廓,此时,车辆的识别区域为不规则的几何图形。
可选地,识别区域的重叠度可以用于表示车辆的重叠度,多个车辆的识别区域之间的重叠度可以为两两车辆之间的重叠度,即两个车辆之间相互重叠的程度。例如,车辆1的识别区域为R1,车辆2的识别区域为R2,若识别区域R1和识别区域R2之间的重叠度为C,则车辆1和车辆2之间的重叠度也为C。
该标识可以包括第一指示信息和/或第二指示信息,其中,第一指示信息用于指示多个车辆的识别区域之间的重叠度高于或等于阈值的识别区域,第二指示信息用于指示多个车辆的识别区域之间重叠度低于阈值的识别区域。
作为一种示例,第一指示信息指示的颜色和第二指示信息指示的颜色可以不同。比如,假定车辆的识别区域为矩形,若两个车辆靠的太近,重叠度高,则这两辆车的识别区域可以为红色矩形;若两个车辆之间的距离相对较远,两个车辆的识别区域没有重叠,则这两辆车的识别区域可以为绿色矩形。
该实现方式将第一指示信息指示的颜色和第二指示信息指示的颜色设置为不同,能够以一种更直观的方式体现出重叠度较高以及重叠度较低的识 别区域的不同。
作为另一种示例,第一指示信息可以为虚线,第二指示信息可以为实线。仍然假定车辆的识别区域为矩形,若两个车辆的重叠度高,则这两辆车的识别区域可以为虚线框;若两个车辆之间没有重叠,则这两辆车的识别区域可以为实线框。
应理解,本申请实施例中的具体的例子只是为了帮助本领域技术人员更好地理解本发明实施例,而非限制本申请实施例的范围。
上述技术方案,将识别区域的重叠度高于或等于阈值的标识和重叠度低于阈值的标识设置为不同,使得硬件工程师在接收到识别图片后,对于重叠度较高的识别区域和重叠度较低的识别区域,一目了然,能够避免硬件工程师在调整摄像头时将重叠度较高的识别区域错认为重叠度低,或者将重叠度较低的识别区域错认为重叠度高的问题,从而可以提高调整摄像头的效率。
为了使硬件工程师更清楚、快速地知道识别图片中重叠度高于或等于阈值的识别区域之间的具体重叠程度,进一步地,识别图片还可以标识出重叠度高于或等于阈值的至少两个识别区域之间的重叠度值。
在一种实现方式中,识别图片可以包括第三指示信息,第三指示信息用于指示重叠度高于或等于阈值的至少两个识别区域之间的重叠度值。可选地,第三指示信息可以为交并比(intersection over union,IOU)。IOU为衡量不同对象之间的重叠的度量,其可以表示为如下公式:
IOU(A, B) = Area(A∩B) / Area(A∪B)
其中,A和B均为车辆的识别区域,从上述公式可以知道,IOU为A和B的交集面积与A和B的并集面积之比。
IOU越大，则A和B的重叠程度越高。例如，如图4所示，图4的右下角示出了车辆3和车辆6的识别区域之间的IOU为0.25，车辆6和车辆7的识别区域之间的IOU为0.23。可以知道，车辆3和车辆6之间的重叠度高于车辆6和车辆7之间的重叠度。
上述技术方案,识别图片标识出重叠度高于或等于阈值的至少两个识别区域之间的重叠度值,使得硬件工程师可以更清楚、快速地了解识别图片中识别区域之间的重叠程度,从而可以有目的、有针对性地调整摄像头的参数。
当然,识别图片除了可以标识出重叠度高于或等于阈值的至少两个识别 区域之间的重叠度值,也可以标识出重叠度低于阈值的至少两个识别区域之间的重叠度值。
可选地,在本申请实施例中,识别图片还可以包括多个车辆中每一个车辆的标识信息。作为示例,标识信息可以是ID。再次参考图3和图4,图3和图4包括7个车辆,该7个车辆的ID分别为1,2……7。
需要说明的是,该ID可以是调整摄像头的装置为每一个车辆分配的唯一的ID,针对多帧不同的识别图片,同一个车辆的ID相同。比如,第n帧识别图片包括车辆A、车辆B、车辆C和车辆D,这四个车辆的ID分别为1,2,3和4。第(n+2)帧识别图片包括车辆B、车辆C和车辆E,车辆B的ID仍然为2,车辆C的ID仍然为3。
在这种情况下,第三指示信息可以包括识别区域之间的重叠度高于或等于阈值的至少两个车辆的标识信息以及该至少两个车辆的重叠度值。如此,能够降低算法的复杂性。
例如,如图4所示,图中最上方的两个车辆的标识信息分别为6和7,这两个车辆的识别区域的重叠度值为0.23,从图4的右下角可以看出,第三指示信息包括标识信息6和7,并且还包括标识信息分别为6和7的两个车辆之间的IOU值,即6-7IOU:0.23。类似地,标识信息分别为3和6的车辆的识别区域的重叠度值为0.25,从图4的右下角再次可以看出,第三指示信息包括标识信息3和6,并且还包括标识信息分别为3和6的两个车辆之间的IOU值,即3-6IOU:0.25。
或者,识别图片可以包括每一个车辆的识别区域的标识信息。与车辆的标识信息类似,识别区域的标识信息可以为ID,每一个车辆的识别区域的ID都是唯一的。此时,第三指示信息包括重叠度高于或等于阈值的至少两个识别区域的标识信息以及至少两个识别区域之间的重叠度值。
230,发送识别图片,其中,多个车辆的识别区域之间的重叠度用于调整摄像头的参数。
具体而言,如图5所示,1241,调整摄像头的装置124可以向硬件工程师发送识别图片;1301,硬件工程师130接收识别图片,并基于识别图片中多个车辆的识别区域之间的重叠度,调整摄像头的参数。
可选地,发送识别图片,具体可以包括:将多个识别图片打包为第二视 频流,并发送第二视频流。
进一步地,在230之后,摄像头可以以调整后的参数捕获第二车辆图片,并且调整摄像头的装置可以获取到第二车辆图片,并识别第二车辆图片中车辆的车辆信息。其中,车辆信息可以包括车牌、车辆颜色以及车辆大小等。
若调整摄像头的装置可以将第二车辆图片中车辆的车辆信息都识别出来,则表明摄像头已调整到了最佳位置,该位置为摄像头的最终安装位置。此外,调整摄像头的装置可以基于识别出的车辆信息确定车辆的停车时间,并对第二车辆图片中的车辆进行停车收费。
或者,若仍然不能将第二车辆图片中车辆的车辆信息识别出来,则调整摄像头的装置基于第二车辆图片和车辆区域识别模型,继续得到识别图片,该识别图片包括第二车辆图片中的多个车辆中每一个车辆的识别区域。之后,发送识别图片,以使硬件工程师基于每一个车辆的识别区域再次调整摄像头的参数。
摄像头、调整摄像头的装置以及硬件工程师依次循环上述操作,直至调整摄像头的装置能够将车辆图片中所有车辆的车辆信息都准确识别出来。
应理解,在本申请实施例中,“第一”和“第二”仅仅为了区分不同的对象,但并不对本申请实施例的范围构成限制。
本申请实施例,基于以摄像头的初始参数捕获的车辆图片和车辆区域识别模型得到标识出每一个车辆的识别区域的识别图片,且该识别图片包括用于指示车辆的识别区域之间的重叠度的标识,并发送该识别图片。这样,硬件工程师接收到该识别图片后,能够基于每一个车辆的识别区域的重叠度实时地以及有目的地调整摄像头参数,不仅大大减小了调整摄像头的时间,还可以准确、有效地对摄像头进行安装,如此,有利于提高识别车辆信息的准确率。
进一步地,由于车辆的识别区域相对于具体的车牌信息的识别要简单的多,在本申请实施例中,利用车辆的识别区域的重叠度调整摄像头,不需要复杂的运算,既可以提高处理速度,又能适用于较多的停车系统。
上文详细描述了本申请实施例的方法实施例,下面描述本申请实施例的装置实施例,装置实施例与方法实施例相互对应,因此未详细描述的部分可参见前面各方法实施例,装置可以实现上述方法中任意可能实现的方式。
图6示出了本申请一个实施例的停车系统中调整摄像头的装置600的示意性框图。该装置600可以执行上述本申请实施例的停车系统中调整摄像头的方法,例如,该装置600可以为前述方法200中的调整摄像头的装置,比如图5中的124。
如图6所示,该装置600可以包括:
获取单元610,用于获取摄像头捕获的第一车辆图片,所述第一车辆图片包括目标停车区域上的多个车辆的图片;
处理单元620,用于根据所述第一车辆图片以及车辆区域识别模型,得到对所述多个车辆进行识别后的识别图片,所述识别图片标识出所述多个车辆中每一个车辆的识别区域,且所述识别图片包括所述每一个车辆的识别区域的标识,所述标识用于指示所述多个车辆的识别区域之间的重叠度;
通信单元630,用于发送所述识别图片,其中,所述多个车辆的识别区域之间的重叠度用于调整所述摄像头的参数。
可选地,在本申请一个实施例中,所述标识包括第一指示信息和/或第二指示信息,所述第一指示信息用于指示所述多个车辆的识别区域之间的重叠度高于或等于阈值的识别区域,所述第二指示信息用于指示所述多个车辆的识别区域之间的重叠度低于所述阈值的识别区域。
可选地,在本申请一个实施例中,所述第一指示信息指示的颜色和所述第二指示信息指示的颜色不同。
可选地,在本申请一个实施例中,所述识别图片还标识出所述多个车辆的识别区域之间重叠度高于或等于阈值的至少两个车辆之间的重叠度值。
可选地,在本申请一个实施例中,所述识别图片还包括所述多个车辆中每一个车辆的标识信息和第三指示信息,所述第三指示信息包括识别区域之间的重叠度高于或等于所述阈值的至少两个车辆的标识信息以及所述至少两个车辆的重叠度值。
可选地,在本申请一个实施例中,所述摄像头的参数包括以下参数中的至少一种:所述摄像头的高度、所述摄像头的主视线和水平线之间的侧角和所述摄像头的主视线和垂直线之间的俯角。
可选地,在本申请一个实施例中,所述获取单元610还可以用于:获取所述摄像头以调整后的参数捕获的第二车辆图片;
如图7所示,所述装置600还可以包括:识别单元640,用于识别所述第二车辆图片中车辆的车辆信息。
图8是本申请实施例的停车系统中调整摄像头的装置的硬件结构示意图。图8所示的停车系统中调整摄像头的装置800包括存储器801、处理器802、通信接口803以及总线804。其中,存储器801、处理器802、通信接口803通过总线804实现彼此之间的通信连接。
存储器801可以是只读存储器(read-only memory,ROM),静态存储设备和随机存取存储器(random access memory,RAM)。存储器801可以存储程序,当存储器801中存储的程序被处理器802执行时,处理器802和通信接口803用于执行本申请实施例的停车系统中调整摄像头的方法的各个步骤。
处理器802可以采用通用的中央处理器(central processing unit,CPU),微处理器,应用专用集成电路(application specific integrated circuit,ASIC),图形处理器(graphics processing unit,GPU)或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例的停车系统中调整摄像头的装置中的单元所需执行的功能,或者执行本申请实施例的停车系统中调整摄像头的方法。
处理器802还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本申请实施例的停车系统中调整摄像头的方法的各个步骤可以通过处理器802中的硬件的集成逻辑电路或者软件形式的指令完成。
上述处理器802还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、ASIC、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器801,处理器802读取存储器801中的信息,结合其硬件完成本申请实施例的停车系统中调整摄像头的装置中包括的单元所需执行的功能,或者执行本申请实施例的停车 系统中调整摄像头的方法。
通信接口803使用例如但不限于收发器一类的收发装置,来实现装置800与其他设备或通信网络之间的通信。例如,可以通过通信接口803获取摄像头捕获的第一车辆图片。
总线804可包括在装置800各个部件(例如,存储器801、处理器802、通信接口803)之间传送信息的通路。
应注意,尽管上述装置800仅仅示出了存储器、处理器、通信接口,但是在具体实现过程中,本领域的技术人员应当理解,装置800还可以包括实现正常运行所必须的其他器件。同时,根据具体需要,本领域的技术人员应当理解,装置800还可包括实现其他附加功能的硬件器件。此外,本领域的技术人员应当理解,装置800也可仅仅包括实现本申请实施例所必须的器件,而不必包括图8中所示的全部器件。
本申请实施例还提供了一种计算机可读存储介质,存储用于设备执行的程序代码,所述程序代码包括用于执行上述停车系统中调整摄像头的方法中的步骤的指令。
本申请实施例还提供了一种计算机程序产品,所述计算机程序产品包括存储在计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被计算机执行时,使所述计算机执行上述停车系统中调整摄像头的方法。
上述的计算机可读存储介质可以是暂态计算机可读存储介质,也可以是非暂态计算机可读存储介质。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连 接,可以是电性,机械或其它的形式。
本申请中使用的用词仅用于描述实施例并且不用于限制权利要求。如在实施例以及权利要求的描述中使用的,除非上下文清楚地表明,否则单数形式的“一个”和“所述”旨在同样包括复数形式。类似地,如在本申请中所使用的术语“和/或”是指包含一个或一个以上相关联的列出的任何以及所有可能的组合。另外,当用于本申请中时,术语“包括”指陈述的特征、整体、步骤、操作、元素,和/或组件的存在,但不排除一个或一个以上其它特征、整体、步骤、操作、元素、组件和/或这些的分组的存在或添加。
所描述的实施例中的各方面、实施方式、实现或特征能够单独使用或以任意组合的方式使用。所描述的实施例中的各方面可由软件、硬件或软硬件的结合实现。所描述的实施例也可以由存储有计算机可读代码的计算机可读介质体现,该计算机可读代码包括可由至少一个计算装置执行的指令。所述计算机可读介质可与任何能够存储数据的数据存储装置相关联,该数据可由计算机系统读取。用于举例的计算机可读介质可以包括只读存储器、随机存取存储器、紧凑型光盘只读储存器(compact disc read-only memory,CD-ROM)、硬盘驱动器(hard disk drive,HDD)、数字视频光盘(digital video disc,DVD)、磁带以及光数据存储装置等。所述计算机可读介质还可以分布于通过网络联接的计算机系统中,这样计算机可读代码就可以分布式存储并执行。
上述技术描述可参照附图,这些附图形成了本申请的一部分,并且通过描述在附图中示出了依照所描述的实施例的实施方式。虽然这些实施例描述的足够详细以使本领域技术人员能够实现这些实施例,但这些实施例是非限制性的;这样就可以使用其它的实施例,并且在不脱离所描述的实施例的范围的情况下还可以做出变化。比如,流程图中所描述的操作顺序是非限制性的,因此在流程图中阐释并且根据流程图描述的两个或两个以上操作的顺序可以根据若干实施例进行改变。作为另一个例子,在若干实施例中,在流程图中阐释并且根据流程图描述的一个或一个以上操作是可选的,或是可删除的。另外,某些步骤或功能可以添加到所公开的实施例中,或两个以上的步骤顺序被置换。所有这些变化被认为包含在所公开的实施例以及权利要求中。
另外,上述技术描述中使用术语以提供所描述的实施例的透彻理解。然 而,并不需要过于详细的细节以实现所描述的实施例。因此,实施例的上述描述是为了阐释和描述而呈现的。上述描述中所呈现的实施例以及根据这些实施例所公开的例子是单独提供的,以添加上下文并有助于理解所描述的实施例。上述说明书不用于做到无遗漏或将所描述的实施例限制到本申请的精确形式。根据上述教导,若干修改、选择适用以及变化是可行的。在某些情况下,没有详细描述为人所熟知的处理步骤以避免不必要地影响所描述的实施例。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请实施例揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (10)

  1. 一种停车系统中调整摄像头的方法,其特征在于,包括:
    获取(210)摄像头捕获的第一车辆图片,所述第一车辆图片包括目标停车区域上的多个车辆的图片;
    根据所述第一车辆图片以及车辆区域识别模型,得到(220)对所述多个车辆进行识别后的识别图片,所述识别图片标识出所述多个车辆中每一个车辆的识别区域,且所述识别图片包括所述每一个车辆的识别区域的标识,所述标识用于指示所述多个车辆的识别区域之间的重叠度;
    发送(230)所述识别图片,其中,所述多个车辆的识别区域之间的重叠度用于调整所述摄像头的参数。
  2. 根据权利要求1所述的方法,其特征在于,所述标识包括第一指示信息和/或第二指示信息,所述第一指示信息用于指示所述多个车辆的识别区域之间重叠度高于或等于阈值的识别区域,所述第二指示信息用于指示所述多个车辆的识别区域之间重叠度低于所述阈值的识别区域。
  3. 根据权利要求2所述的方法,其特征在于,所述第一指示信息指示的颜色和所述第二指示信息指示的颜色不同。
  4. 根据权利要求2或3所述的方法,其特征在于,所述识别图片还标识出所述多个车辆的识别区域之间重叠度高于或等于阈值的至少两个识别区域之间的重叠度值。
  5. 根据权利要求4所述的方法,其特征在于,所述识别图片还包括所述多个车辆中每一个车辆的标识信息和第三指示信息,所述第三指示信息包括识别区域之间的重叠度高于或等于所述阈值的至少两个车辆的标识信息以及所述至少两个车辆的重叠度值。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述摄像头的参数包括以下参数中的至少一种:所述摄像头的高度、所述摄像头的主视线和水平线之间的侧角和所述摄像头的主视线和垂直线之间的俯角。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:
    获取所述摄像头以调整后的参数捕获的第二车辆图片;
    识别所述第二车辆图片中车辆的车辆信息。
  8. 一种停车系统中调整摄像头的装置,其特征在于,包括:
    获取单元(610),用于获取摄像头捕获的第一车辆图片,所述第一车辆图片包括目标停车区域上的多个车辆的图片;
    处理单元(620),用于根据所述第一车辆图片以及车辆区域识别模型,得到对所述多个车辆进行识别后的识别图片,所述识别图片标识出所述多个车辆中每一个车辆的识别区域,且所述识别图片包括所述每一个车辆的识别区域的标识,所述标识用于指示所述多个车辆的识别区域之间的重叠度;
    通信单元(630),用于发送所述识别图片,其中,所述多个车辆的识别区域之间的重叠度用于调整所述摄像头的参数。
  9. 一种停车系统中调整摄像头的装置,其特征在于,包括:
    存储器,用于存储程序;
    处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行根据权利要求1至7中任一项所述的停车系统中调整摄像头的方法。
  10. 一种计算机可读存储介质,其特征在于,所述计算机可读介质存储用于设备执行的程序代码,所述程序代码包括用于执行根据权利要求1至7中任一项所述的停车系统中调整摄像头的方法中的步骤的指令。
PCT/CN2022/107188 2021-07-29 2022-07-21 停车系统中调整摄像头的方法和装置 WO2023005808A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110861778.1A CN115691146A (zh) 2021-07-29 2021-07-29 停车系统中调整摄像头的方法和装置
CN202110861778.1 2021-07-29

Publications (1)

Publication Number Publication Date
WO2023005808A1 true WO2023005808A1 (zh) 2023-02-02

Family

ID=85058328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107188 WO2023005808A1 (zh) 2021-07-29 2022-07-21 停车系统中调整摄像头的方法和装置

Country Status (2)

Country Link
CN (1) CN115691146A (zh)
WO (1) WO2023005808A1 (zh)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090234553A1 (en) * 2008-03-13 2009-09-17 Fuji Jukogyo Kabushiki Kaisha Vehicle running control system
JP2010117800A (ja) * 2008-11-11 2010-05-27 Toshiba It & Control Systems Corp 駐車場監視装置及び方法
CN102129785A (zh) * 2011-03-18 2011-07-20 沈诗文 大场景停车场智能管理系统
JP2014039217A (ja) * 2012-08-20 2014-02-27 Mitsubishi Heavy Ind Ltd 車両情報認識システム
CN108297794A (zh) * 2018-01-11 2018-07-20 阿尔派株式会社 停车支援装置及行驶预测线显示方法
CN111009131A (zh) * 2019-12-05 2020-04-14 成都思晗科技股份有限公司 一种基于图像识别的高位视频智能停车系统
CN110910655A (zh) * 2019-12-11 2020-03-24 深圳市捷顺科技实业股份有限公司 一种停车管理方法、装置及设备

Also Published As

Publication number Publication date
CN115691146A (zh) 2023-02-03

Similar Documents

Publication Publication Date Title
AU2020100946A4 (en) Multi-source traffic information sensing roadside device for smart highway
CN108877269B (zh) 一种交叉路口车辆状态检测及v2x广播方法
WO2021098211A1 (zh) 一种路况信息的监测方法及装置
CN108765975B (zh) 路侧垂直停车场管理系统及方法
CN111325858B (zh) 针对路边临时停车位实现自动计费管理的方法
CN111405196A (zh) 一种基于视频拼接的车辆管理的方法及系统
CN109360442A (zh) 一种智慧型路边停车位管理系统
WO2023179416A1 (zh) 确定车辆进出泊位的方法、装置、设备和存储介质
CN111951598B (zh) 一种车辆跟踪监测方法、装置及系统
CN114627409A (zh) 一种车辆异常变道的检测方法及装置
WO2023005808A1 (zh) 停车系统中调整摄像头的方法和装置
JP2020095623A (ja) 画像処理装置および画像処理方法
Chandrasekaran et al. Computer vision based parking optimization system
CN110880205B (zh) 一种停车收费方法及装置
CN115311891B (zh) 路边和停车场空闲停车位的共享方法、系统及存储介质
CN113345118B (zh) 停车收费管理方法、系统及存储介质
CN110853394A (zh) 一种基于ai技术的地下停车场停车与寻车方法及系统
CN114333084B (zh) 一种基于nb-iot的停车收费系统、智慧车牌及地磁
WO2022226798A1 (zh) 自动泊车方法、装置及系统
WO2018227532A1 (zh) 一种全智能无人值守停车系统及方法
CN114038227B (zh) 一种基于智能充电地锁的停车及充电管理方法及系统
CN115601738A (zh) 停车信息获取方法、装置、设备、存储介质及程序产品
TW202341006A (zh) 物件追蹤整合方法及整合裝置
CN111709354B (zh) 识别目标区域的方法、装置、电子设备和路侧设备
CN114449481A (zh) 基于v2x技术确定所在车道当前信号灯灯色的方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22848413

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE