CN110852250B - Vehicle deduplication method and device based on maximum area method and storage medium

Vehicle deduplication method and device based on maximum area method and storage medium

Info

Publication number
CN110852250B
CN110852250B (application number CN201911083603.1A)
Authority
CN
China
Prior art keywords
vehicle
image
rectangular frame
area index
vehicle image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911083603.1A
Other languages
Chinese (zh)
Other versions
CN110852250A (en)
Inventor
纪艺慧
魏朝东
曾鹏
陈志飞
聂志巧
刘昱龙
潘锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guotou Intelligent Xiamen Information Co ltd
China Electronics Engineering Design Institute Co Ltd
Original Assignee
Xiamen Meiya Pico Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meiya Pico Information Co Ltd
Priority to CN201911083603.1A
Publication of CN110852250A
Application granted
Publication of CN110852250B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 Scene text, e.g. street names
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle deduplication method and device based on a maximum area method: frame images in a video are obtained, and vehicle detection is performed on each frame image with a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle; a vehicle ID is generated for the vehicle image through a target tracking algorithm, and a vehicle image set of the vehicle corresponding to the vehicle ID in the frame images is obtained; the area index of the rectangular frame of each vehicle image in the vehicle image set is calculated, and the vehicle image corresponding to the largest calculated area index is marked as the deduplicated vehicle image. The vehicle deduplication method and device can effectively reduce repeated vehicle data, reduce the back-end load of vehicle information extraction, and greatly improve system performance.

Description

Vehicle deduplication method and device based on maximum area method and storage medium
Technical Field
The invention relates to the field of video image processing, and in particular to a vehicle deduplication method and device based on a maximum area method, and a storage medium.
Background
In video data processing, data processing methods that meet the requirements of different application scenarios are developed according to specific service requirements, and vehicle deduplication is one of the methods applied in video structuring. Vehicle deduplication is mainly used to remove duplicates of the same moving or stationary vehicle in a video and finally output the vehicle image that best fits the service scenario, such as the clearest and most complete vehicle image. Vehicle deduplication can be applied to the extraction of structured vehicle information from video, where the structured processing includes vehicle color recognition, vehicle type recognition, license plate recognition and the like; it can effectively reduce repeated vehicle data, reduce the back-end load of vehicle information extraction, and greatly improve the performance of the device.
In the prior art, algorithms applied to vehicle deduplication include motion tracking algorithms and vehicle feature extraction and comparison algorithms, but they provide no basis for selecting the vehicle image that best fits the service scenario; that is, the clearest and most complete vehicle image cannot be obtained from the video, problems such as missed or unclear vehicle images easily occur, and no vehicle image from which all features of the whole vehicle can be obtained is available. Obtaining the clearest and most complete vehicle image serves as the basis for subsequent structured processing based on the vehicle image, so that the vehicle features can be extracted completely.
In view of the above, designing a new vehicle deduplication method based on the maximum area method to obtain a clear and complete vehicle image is one of the problems to be solved urgently.
Disclosure of Invention
In view of the problems that existing vehicle deduplication cannot output the vehicle image that best fits the service scenario and is clearest and most complete, an object of the embodiments of the present application is to provide a vehicle deduplication method, device and storage medium based on the maximum area method, so as to solve the technical problems mentioned in the above background.
In a first aspect, an embodiment of the present application provides a vehicle deduplication method based on a maximum area method, including the following steps:
s1: acquiring frame images in a video, and performing vehicle detection on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle;
s2: generating a vehicle ID in the vehicle image through a target tracking algorithm, and obtaining a vehicle image set of a vehicle in the frame image corresponding to the vehicle ID; and
s3: and respectively calculating the area index of the rectangular frame of each vehicle image in the vehicle image set, and marking the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image.
In some embodiments, the area index S of the vehicle image is calculated in step S3 by:
(Formula defining the area index S; published as image BDA0002264695560000021 in the original document.)
wherein the lower-left corner of the vehicle image is taken as the origin, x0 is the abscissa of the lower-left corner of the rectangular frame, y0 is the ordinate of the lower-left corner of the rectangular frame, w0 is the length of the rectangular frame, h0 is the width of the rectangular frame, w is the length of the vehicle image, h is the width of the vehicle image, b is a constant describing how close the rectangular frame is to the edge of the vehicle image, and k is a coefficient that reduces the area index of the rectangular frame when the rectangular frame is close to the edge of the vehicle image; k takes 0.1. By selecting the vehicle image with the largest calculated area index, the clearest and most complete vehicle image can be obtained, and more features of the vehicle information can be obtained from that image.
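The published expression for S is reproduced only as an image. A plausible reconstruction from the definitions above (an assumption consistent with the description, not a verbatim transcription of the published formula) is a piecewise definition in which the raw area of the rectangular frame is scaled by k whenever the frame falls within a margin of width b of the left, right or bottom edge of the vehicle image:

    S = \begin{cases} k \, w_0 h_0, & \text{if } x_0 < b \ \text{or}\ x_0 + w_0 > w - b \ \text{or}\ y_0 + h_0 > h - b \\ w_0 h_0, & \text{otherwise} \end{cases}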
In some embodiments, the target tracking algorithm comprises a DeepSORT algorithm. The DeepSORT algorithm is an improved algorithm based on the SORT algorithm, can realize online tracking, and judges whether the vehicles in two vehicle images are the same vehicle.
In some embodiments, the target detection algorithm comprises a Yolo algorithm. The position of the target can be accurately identified and detected by using the Yolo algorithm, only one CNN operation is needed, and the algorithm speed is high.
In some embodiments, step S3 further comprises:
if no maximum area index has yet been recorded for the vehicle ID, or if a recorded maximum area index exists and the area index of the rectangular frame of the vehicle image is larger than the recorded maximum area index of the vehicle ID, the area index of the rectangular frame of the vehicle image is updated as the maximum area index of the vehicle ID.
Through the above steps, the vehicle picture with the largest area index can be accurately obtained, the vehicle can be effectively deduplicated, and the clearest and most complete vehicle image can be obtained.
In a second aspect, an embodiment of the present application further provides a vehicle deduplication device based on a maximum area method, including:
the vehicle detection module is configured to acquire frame images in the video, and perform vehicle detection on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle;
the vehicle tracking module is configured to generate a vehicle ID in the vehicle image through a target tracking algorithm and obtain a vehicle image set of a vehicle in the frame image corresponding to the vehicle ID; and
and the index calculation module is configured to calculate the area index of the rectangular frame of each vehicle image in the vehicle image set respectively, and mark the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image.
In some embodiments, the area index S of the vehicle image is calculated in the index calculation module by:
(Formula defining the area index S; published as image BDA0002264695560000031 in the original document.)
wherein the lower-left corner of the vehicle image is taken as the origin, x0 is the abscissa of the lower-left corner of the rectangular frame, y0 is the ordinate of the lower-left corner of the rectangular frame, w0 is the length of the rectangular frame, h0 is the width of the rectangular frame, w is the length of the vehicle image, h is the width of the vehicle image, b is a constant describing how close the rectangular frame is to the edge of the vehicle image, and k is a coefficient that reduces the area index of the rectangular frame when the rectangular frame is close to the edge of the vehicle image; k takes 0.1. By selecting the vehicle image with the largest calculated area index, the clearest and most complete vehicle image can be obtained, and more features of the vehicle information can be obtained from that image.
In some embodiments, the target tracking algorithm comprises a DeepSORT algorithm. The DeepSORT algorithm is an improved algorithm based on the SORT algorithm, can realize online tracking, and judges whether the vehicles in the two vehicle images are the same vehicle.
In some embodiments, the target detection algorithm comprises a Yolo algorithm. The position of the target can be accurately identified and detected by using the Yolo algorithm, only one CNN operation is needed, and the algorithm speed is high.
In some embodiments, the index calculation module is further configured to:
if no maximum area index has yet been recorded for the vehicle ID, or if a recorded maximum area index exists and the area index of the rectangular frame of the vehicle image is larger than the recorded maximum area index of the vehicle ID, update the area index of the rectangular frame of the vehicle image as the maximum area index of the vehicle ID.
Through the above steps, the vehicle picture with the largest area index can be accurately obtained, the vehicle can be effectively deduplicated, and the clearest and most complete vehicle image can be obtained.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
The embodiment of the application discloses a vehicle deduplication method and device based on a maximum area method. The embodiment of the application can effectively serve scenarios in which all features of the whole vehicle are of interest. The vehicle deduplication method and device can effectively reduce repeated vehicle data, reduce the back-end load of vehicle information extraction, and greatly improve system performance. The finally obtained deduplicated vehicle picture can be applied to the extraction of structured vehicle information from video, such as vehicle color recognition, vehicle type recognition, license plate recognition and the like.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments will be briefly introduced below, and it is apparent that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings may be obtained based on these drawings without creative efforts.
FIG. 1 is an exemplary device architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a schematic flow chart of a vehicle deduplication method based on the maximum area method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a vehicle deduplication device based on the maximum area method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device suitable for implementing the electronic device according to the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 illustrates an exemplary device architecture 100 to which the vehicle deduplication method based on the maximum area method or the vehicle deduplication device based on the maximum area method of an embodiment of the present application may be applied.
As shown in fig. 1, the apparatus architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various applications, such as data processing type applications, file processing type applications, and the like, may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server that provides various services, such as a background data processing server that processes files or data uploaded by the terminal devices 101, 102, 103. The background data processing server can process the acquired files or data to generate a processing result.
It should be noted that the vehicle deduplication method based on the maximum area method provided in the embodiment of the present application may be executed by the server 105, and may also be executed by the terminal devices 101, 102, and 103; accordingly, the vehicle deduplication device based on the maximum area method may be provided in the server 105, and may also be provided in the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the processed data does not need to be acquired from a remote location, the above device architecture may not include a network, but only a server or a terminal device.
Fig. 2 shows the vehicle deduplication method based on the maximum area method disclosed by an embodiment of the application, which comprises the following steps:
s1: frame images in the video are obtained, and vehicle detection is carried out on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle.
In a specific embodiment, a part of the acquired video is selected and read, and the frame images in the video are acquired; the frame images may be sorted in a certain order, and in a preferred embodiment they are sorted by time. All the frame images are traversed: if the traversal is not yet complete, step S1 is performed on the next frame image; once all the frame images have been traversed, the recorded vehicle information of each vehicle ID, including the vehicle images, is traversed.
In a preferred embodiment, the target detection algorithm includes a Yolo algorithm, and target detection is performed on each frame image through the Yolo algorithm to detect the target vehicle in the frame image. In a specific embodiment, the Yolo algorithm uses a convolutional network to extract features and then uses a fully connected layer to obtain the predicted values. The network structure follows the GoogLeNet model and contains 24 convolutional layers and 2 fully connected layers; in the convolutional layers, 1x1 convolutions are mainly used for channel reduction, each followed by a 3x3 convolution. The convolutional and fully connected layers use the Leaky ReLU activation function max(x, 0.1x), while the last layer uses a linear activation function. In addition to the structure above, a lightweight version, Fast Yolo, which uses only 9 convolutional layers and fewer convolution kernels per layer, can also be used. The detection finally yields a vehicle image in which the detected complete vehicle is framed by a rectangular frame. In addition, the Yolo algorithm includes the Yolo-v1 algorithm and the Yolo9000 algorithm, and in other alternative embodiments, other algorithms with the same or similar functions may be adopted to meet the requirements of target detection or other specific service scenarios.
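For illustration only, the per-frame detection of step S1 might be wired up as in the following Python sketch; the yolo_detect() helper is a hypothetical wrapper around any Yolo-family detector and is not part of the disclosure, and OpenCV's cv2.VideoCapture is used simply as one common way to read frames.

    import cv2

    def detect_vehicles_per_frame(video_path, yolo_detect):
        """Yield (frame_index, frame, boxes) for each frame image of the video.

        yolo_detect(frame) is a hypothetical helper returning a list of
        (x0, y0, w0, h0) rectangular frames, each containing a complete
        vehicle.  Note that OpenCV uses a top-left image origin, whereas the
        area-index formula is stated with a lower-left origin.
        """
        cap = cv2.VideoCapture(video_path)
        frame_index = 0
        try:
            while True:
                ok, frame = cap.read()
                if not ok:                      # all frame images traversed
                    break
                boxes = yolo_detect(frame)      # step S1: vehicle detection
                yield frame_index, frame, boxes
                frame_index += 1
        finally:
            cap.release()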
S2: and generating a vehicle ID in the vehicle image through a target tracking algorithm, and obtaining a vehicle image set of the vehicle in the frame image corresponding to the vehicle ID.
In a particular embodiment, the target tracking algorithm includes the DeepSORT algorithm. The DeepSORT algorithm is an improved algorithm based on the SORT algorithm; it can realize online tracking, judges whether the vehicles in two vehicle images are the same vehicle, and thereby realizes motion tracking. In other optional embodiments, other target tracking algorithms such as the SORT algorithm may also be selected for target tracking, as long as the requirements of the corresponding service scenario can be met. Finally, an integer identification code of the vehicle, i.e., the vehicle ID, is obtained; the vehicle ID is unique, and vehicles with the same vehicle ID are the same vehicle.
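Continuing the sketch, step S2 can be pictured as below; the tracker object and its update(boxes, frame) method returning (vehicle_id, box) pairs are assumptions standing in for a DeepSORT-style tracker, not the API of any particular implementation.

    from collections import defaultdict

    def group_by_vehicle_id(frames_with_boxes, tracker):
        """Build the vehicle image set for every vehicle ID.

        frames_with_boxes yields (frame_index, frame, boxes) as produced by
        the detection step; tracker.update(boxes, frame) is assumed to return
        (vehicle_id, box) pairs, one per vehicle tracked in that frame.
        """
        image_sets = defaultdict(list)   # vehicle ID -> [(frame_index, frame, box), ...]
        for frame_index, frame, boxes in frames_with_boxes:
            for vehicle_id, box in tracker.update(boxes, frame):
                image_sets[vehicle_id].append((frame_index, frame, box))
        return image_sets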
S3: and respectively calculating the area index of the rectangular frame of each vehicle image in the vehicle image set, and marking the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image.
In a particular embodiment, the area index S of the vehicle image may be calculated by:
(Formula defining the area index S; published as image BDA0002264695560000061 in the original document.)
wherein the lower-left corner of the vehicle image is taken as the origin, x0 is the abscissa of the lower-left corner of the rectangular frame, y0 is the ordinate of the lower-left corner of the rectangular frame, w0 is the length of the rectangular frame, h0 is the width of the rectangular frame, w is the length of the vehicle image, and h is the width of the vehicle image. b is a constant describing how close the rectangular frame is to the edge of the vehicle image; in the general case, b may take 100. k is a coefficient that reduces the area index of the rectangular frame when the rectangular frame is close to the edge of the vehicle image, and k takes 0.1. When x0 < b, or x0 + w0 > w - b, or y0 + h0 > h - b, k is applied to reduce the index of a vehicle whose rectangular frame is near the left, right or bottom edge of the image. Since k is smaller than 1 and can be set to 0.1, it further widens the gap between the area indexes of vehicles near the left, right and bottom edges and those of vehicles at other positions, so that a suitable vehicle image in which the vehicle is located in the middle-upper part of the image is screened out. Calculating the area index in this way yields the vehicle image with the largest vehicle area. The clearest and most complete vehicle image can thus be obtained, and that image contains more features of the vehicle information. The larger the area index of the vehicle image, the more complete and clearer the vehicle.
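A minimal Python sketch of this computation, with b = 100 and k = 0.1 as above; because the published formula is only available as an image, the way the terms are combined here (the raw frame area, scaled by k when the frame is near the left, right or bottom edge) is an assumption consistent with the description rather than the verbatim published expression.

    def area_index(x0, y0, w0, h0, w, h, b=100, k=0.1):
        """Area index S of a rectangular frame inside a w-by-h vehicle image.

        The origin is the lower-left corner of the vehicle image; (x0, y0) is
        the lower-left corner of the rectangular frame, and (w0, h0) are its
        length and width.
        """
        near_edge = (x0 < b) or (x0 + w0 > w - b) or (y0 + h0 > h - b)
        s = w0 * h0        # raw area of the rectangular frame
        if near_edge:
            s *= k         # penalise frames near the left/right/bottom edge
        return s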
In a specific embodiment, step S3 further includes:
if no maximum area index has been recorded for the vehicle ID whose rectangular-frame area index was just calculated, the vehicle is a completely new vehicle that has just been captured; it has not appeared before, so there is no recorded maximum area index. In that case, or when a recorded maximum area index exists and the area index of the rectangular frame of the current vehicle image is larger than the recorded maximum area index of the vehicle ID, the area index of the rectangular frame of the vehicle image is updated as the maximum area index of the vehicle ID.
In this way, for each vehicle ID one vehicle image with the largest area index is obtained; the area of the vehicle in that image is the largest, so it includes all the features of the vehicle, such as the vehicle type, the vehicle color and the rearview mirrors, which facilitates further feature extraction and processing of those features.
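Tying step S3 together, the per-ID bookkeeping described above can be sketched as follows; best_per_id and the other helper names are ours for illustration, and the sketch relies on the area_index() function given earlier.

    def deduplicate(image_sets, image_width, image_height, b=100, k=0.1):
        """Return, for every vehicle ID, the observation with the largest area index."""
        best_per_id = {}   # vehicle ID -> (max area index, frame_index, frame, box)
        for vehicle_id, observations in image_sets.items():
            for frame_index, frame, (x0, y0, w0, h0) in observations:
                s = area_index(x0, y0, w0, h0, image_width, image_height, b, k)
                # Update when this vehicle ID has no recorded maximum yet (a newly
                # captured vehicle) or when the new index exceeds the recorded maximum.
                if vehicle_id not in best_per_id or s > best_per_id[vehicle_id][0]:
                    best_per_id[vehicle_id] = (s, frame_index, frame, (x0, y0, w0, h0))
        return {vid: (frame_index, frame, box)
                for vid, (_, frame_index, frame, box) in best_per_id.items()}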
With further reference to fig. 3, as an implementation of the method shown in the above figures, the present application provides an embodiment of a vehicle deduplication device based on the maximum area method, which corresponds to the method embodiment shown in fig. 2 and is particularly applicable to various electronic devices.
The embodiment of the application specifically comprises:
the vehicle detection module 1 is configured to acquire frame images in a video, and perform vehicle detection on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle;
the vehicle tracking module 2 is configured to generate a vehicle ID in the vehicle image through a target tracking algorithm, and obtain a vehicle image set of a vehicle in the frame image corresponding to the vehicle ID;
and the index calculation module 3 is configured to calculate the area index of the rectangular frame of each vehicle image in the vehicle image set respectively, and mark the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image.
In a specific embodiment, the area index S of the vehicle image is calculated in the index calculation module 3 by:
(Formula defining the area index S; published as image BDA0002264695560000071 in the original document.)
wherein the lower-left corner of the vehicle image is taken as the origin, x0 is the abscissa of the lower-left corner of the rectangular frame, y0 is the ordinate of the lower-left corner of the rectangular frame, w0 is the length of the rectangular frame, h0 is the width of the rectangular frame, w is the length of the vehicle image, and h is the width of the vehicle image. b is a constant describing how close the rectangular frame is to the edge of the vehicle image, and k is a coefficient that reduces the area index of the rectangular frame when the rectangular frame is close to the edge of the vehicle image; k takes 0.1. When x0 < b, or x0 + w0 > w - b, or y0 + h0 > h - b, k is applied to reduce the index of a vehicle whose rectangular frame is near the left, right or bottom edge of the image. Since k is smaller than 1 and can be set to 0.1, it further widens the gap between the area indexes of vehicles near the left, right and bottom edges and those of vehicles at other positions, so that a suitable vehicle image in which the vehicle is located in the middle-upper part of the image is screened out. Calculating the area index in this way yields the vehicle image with the largest vehicle area. The clearest and most complete vehicle image can thus be obtained, and that image contains more features of the vehicle information. The larger the area index of the vehicle image, the more complete and clearer the vehicle.
In a particular embodiment, the target tracking algorithm includes the DeepSORT algorithm. The DeepSORT algorithm is an improved algorithm based on the SORT algorithm, can realize online tracking, and judges whether the vehicles in two vehicle images are the same vehicle.
In a particular embodiment, the target detection algorithm comprises a Yolo algorithm. The position of the target can be accurately identified and detected by using the Yolo algorithm, only one CNN operation is needed, and the algorithm speed is high.
In a specific embodiment, the index calculation module 3 is further configured to:
if no maximum area index has yet been recorded for the vehicle ID, or if a recorded maximum area index exists and the area index of the rectangular frame of the vehicle image is larger than the recorded maximum area index of the vehicle ID, update the area index of the rectangular frame of the vehicle image as the maximum area index of the vehicle ID.
Through the above steps, the vehicle picture with the largest area index can be accurately obtained, the vehicle can be effectively deduplicated, and the clearest and most complete vehicle image can be obtained.
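As an illustration of how the three modules described above might be composed end to end (using the sketch functions from the method description; all names are ours, not the patent's):

    def run_vehicle_dedup(video_path, yolo_detect, tracker, image_width, image_height):
        """End-to-end composition of the detection, tracking and index-calculation modules."""
        frames = detect_vehicles_per_frame(video_path, yolo_detect)   # vehicle detection module
        image_sets = group_by_vehicle_id(frames, tracker)             # vehicle tracking module
        return deduplicate(image_sets, image_width, image_height)     # index calculation module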
The embodiment of the application discloses a vehicle deduplication method and device based on a maximum area method. The embodiment of the application can effectively serve scenarios in which all features of the vehicle are of interest. The vehicle deduplication method and device can effectively reduce repeated vehicle data, reduce the back-end load of vehicle information extraction, and greatly improve system performance. The finally obtained deduplicated vehicle picture can be applied to the extraction of structured vehicle information from video, such as vehicle color recognition, vehicle type recognition, license plate recognition and the like. The method and the device can improve the efficiency and the accuracy of subsequent image structuring processing.
Referring now to fig. 4, a schematic diagram of a computer apparatus 400 suitable for use in implementing an electronic device (e.g., the server or terminal device shown in fig. 1) according to an embodiment of the present application is shown. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the use range of the embodiment of the present application.
As shown in fig. 4, the computer apparatus 400 includes a Central Processing Unit (CPU) 401 and a Graphics Processor (GPU) 402, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 403 or a program loaded from a storage section 409 into a Random Access Memory (RAM) 404. In the RAM 404, various programs and data necessary for the operation of the apparatus 400 are also stored. The CPU 401, GPU 402, ROM 403, and RAM 404 are connected to each other via a bus 405. An input/output (I/O) interface 406 is also connected to the bus 405.
The following components are connected to the I/O interface 406: an input portion 407 including a keyboard, a mouse, and the like; an output portion 408 including, for example, a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 409 including a hard disk and the like; and a communication section 410 including a network interface card such as a LAN card, a modem, or the like. The communication section 410 performs communication processing via a network such as the Internet. A drive 411 may also be connected to the I/O interface 406 as needed. A removable medium 412, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 411 as necessary, so that a computer program read out therefrom is installed into the storage section 409 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 410, and/or installed from the removable medium 412. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 401 and a Graphics Processor (GPU) 402.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based devices that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a vehicle detection module, a vehicle tracking module, an index calculation module, and a vehicle deduplication module. The names of these modules do not limit the modules themselves in some cases; for example, the vehicle detection module may also be described as "configured to acquire frame images in a video, and perform vehicle detection on each frame image through a Yolo algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire frame images in a video, and perform vehicle detection on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle; generate a vehicle ID in the vehicle image through a target tracking algorithm, and obtain a vehicle image set of the vehicle in the frame images corresponding to the vehicle ID; and respectively calculate the area index of the rectangular frame of each vehicle image in the vehicle image set, and mark the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements in which any combination of the features described above or their equivalents does not depart from the spirit of the invention disclosed above. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (3)

1. A vehicle deduplication method based on a maximum area method, characterized by comprising the following steps:
s1: acquiring frame images in a video, and performing vehicle detection on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle, wherein the target detection algorithm comprises a Yolo algorithm;
s2: generating a vehicle ID in the vehicle image through a target tracking algorithm, and obtaining a vehicle image set of a vehicle in the frame image corresponding to the vehicle ID, wherein the target tracking algorithm comprises a DeepSORT algorithm; and
s3: respectively calculating the area index of the rectangular frame of each vehicle image in the vehicle image set, and marking the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image; if no maximum area index has been recorded for the vehicle ID, or if a recorded maximum area index exists and the area index of the rectangular frame of the vehicle image is larger than the recorded maximum area index of the vehicle ID, updating the area index of the rectangular frame of the vehicle image as the maximum area index of the vehicle ID;
in the step S3, an area index S of the vehicle image is calculated by the following formula:
(Formula defining the area index S; published as image FDA0003802080590000011 in the original claim.)
wherein the lower-left corner of the vehicle image is taken as the origin, x0 is the abscissa of the lower-left corner of the rectangular frame, y0 is the ordinate of the lower-left corner of the rectangular frame, w0 is the length of the rectangular frame, h0 is the width of the rectangular frame, w is the length of the vehicle image, h is the width of the vehicle image, b is a constant describing how close the rectangular frame is to the edge of the vehicle image and takes 100, and k is a coefficient that reduces the area index of the rectangular frame when the rectangular frame is close to the edge of the vehicle image and takes 0.1, so as to screen out the vehicle image in which the vehicle is located at the middle-upper position of the image.
2. A vehicle deduplication device based on a maximum area method, characterized by comprising:
the vehicle detection module is configured to acquire frame images in a video, and perform vehicle detection on each frame image through a target detection algorithm to obtain a vehicle image containing a rectangular frame of a complete vehicle, wherein the target detection algorithm comprises a Yolo algorithm;
the vehicle tracking module is configured to generate a vehicle ID in the vehicle image through a target tracking algorithm, and obtain a vehicle image set of a vehicle corresponding to the vehicle ID in the frame image, wherein the target tracking algorithm comprises a DeepSORT algorithm; and
the index calculation module is configured to calculate the area index of the rectangular frame of each vehicle image in the vehicle image set respectively, and mark the vehicle image corresponding to the largest calculated area index as the deduplicated vehicle image; if no maximum area index has been recorded for the vehicle ID, or if a recorded maximum area index exists and the area index of the rectangular frame of the vehicle image is larger than the recorded maximum area index of the vehicle ID, the area index of the rectangular frame of the vehicle image is updated as the maximum area index of the vehicle ID;
The index calculation module calculates an area index S of the vehicle image by the following formula:
(Formula defining the area index S; published as image FDA0003802080590000021 in the original claim.)
wherein the lower-left corner of the vehicle image is taken as the origin, x0 is the abscissa of the lower-left corner of the rectangular frame, y0 is the ordinate of the lower-left corner of the rectangular frame, w0 is the length of the rectangular frame, h0 is the width of the rectangular frame, w is the length of the vehicle image, h is the width of the vehicle image, b is a constant describing how close the rectangular frame is to the edge of the vehicle image and takes 100, and k is a coefficient that reduces the area index of the rectangular frame when the rectangular frame is close to the edge of the vehicle image and takes 0.1, so as to screen out the vehicle image in which the vehicle is located at the middle-upper position of the image.
3. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to claim 1.
CN201911083603.1A 2019-11-07 2019-11-07 Vehicle deduplication method and device based on maximum area method and storage medium Active CN110852250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911083603.1A CN110852250B (en) 2019-11-07 2019-11-07 Vehicle deduplication method and device based on maximum area method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911083603.1A CN110852250B (en) 2019-11-07 2019-11-07 Vehicle deduplication method and device based on maximum area method and storage medium

Publications (2)

Publication Number Publication Date
CN110852250A CN110852250A (en) 2020-02-28
CN110852250B true CN110852250B (en) 2022-12-02

Family

ID=69598679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911083603.1A Active CN110852250B (en) Vehicle deduplication method and device based on maximum area method and storage medium

Country Status (1)

Country Link
CN (1) CN110852250B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037369A (en) * 2020-07-23 2020-12-04 汇纳科技股份有限公司 Unlocking method, system, medium and device of automatic parking spot lock based on vehicle identification
CN112034423B (en) * 2020-09-08 2023-12-26 湖南大学 High-precision mobile vehicle positioning method based on LED visible light communication

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228106A (en) * 2016-06-27 2016-12-14 开易(北京)科技有限公司 The real-time vehicle detection filter method of a kind of improvement and system
CN109034019A (en) * 2018-07-12 2018-12-18 浙江工业大学 A kind of yellow duplicate rows registration number character dividing method based on row cut-off rule
CN110288838A (en) * 2019-07-19 2019-09-27 网链科技集团有限公司 Electric bicycle makes a dash across the red light identifying system and method
CN110348451A (en) * 2019-07-18 2019-10-18 西南交通大学 Case number (CN) automatic collection and recognition methods in railway container cargo handling process

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228106A (en) * 2016-06-27 2016-12-14 开易(北京)科技有限公司 The real-time vehicle detection filter method of a kind of improvement and system
CN109034019A (en) * 2018-07-12 2018-12-18 浙江工业大学 A kind of yellow duplicate rows registration number character dividing method based on row cut-off rule
CN110348451A (en) * 2019-07-18 2019-10-18 西南交通大学 Case number (CN) automatic collection and recognition methods in railway container cargo handling process
CN110288838A (en) * 2019-07-19 2019-09-27 网链科技集团有限公司 Electric bicycle makes a dash across the red light identifying system and method

Also Published As

Publication number Publication date
CN110852250A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN108710885B (en) Target object detection method and device
CN108229419B (en) Method and apparatus for clustering images
US9697442B2 (en) Object detection in digital images
CN107220652B (en) Method and device for processing pictures
CN109255337B (en) Face key point detection method and device
CN112668588B (en) Parking space information generation method, device, equipment and computer readable medium
US20220392202A1 (en) Imaging processing method and apparatus, electronic device, and storage medium
CN110298851B (en) Training method and device for human body segmentation neural network
CN110349161B (en) Image segmentation method, image segmentation device, electronic equipment and storage medium
CN112561840A (en) Video clipping method and device, storage medium and electronic equipment
CN111767750A (en) Image processing method and device
CN110852250B (en) Vehicle weight removing method and device based on maximum area method and storage medium
CN112907628A (en) Video target tracking method and device, storage medium and electronic equipment
CN110310293B (en) Human body image segmentation method and device
CN109919220B (en) Method and apparatus for generating feature vectors of video
CN110633597A (en) Driving region detection method and device
CN110852252B (en) Vehicle weight-removing method and device based on minimum distance and maximum length-width ratio
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN110826497B (en) Vehicle weight removing method and device based on minimum distance method and storage medium
CN112487943B (en) Key frame de-duplication method and device and electronic equipment
CN110634155A (en) Target detection method and device based on deep learning
CN110796698B (en) Vehicle weight removing method and device with maximum area and minimum length-width ratio
CN113033552B (en) Text recognition method and device and electronic equipment
CN111737575B (en) Content distribution method, content distribution device, readable medium and electronic equipment
CN115115836A (en) Image recognition method, image recognition device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230111

Address after: Unit 102-402, No. 12, guanri Road, phase II, Xiamen Software Park, Fujian Province, 361000

Patentee after: XIAMEN MEIYA PICO INFORMATION Co.,Ltd.

Patentee after: CHINA ELECTRONICS ENGINEERING DESIGN INSTITUTE Co.,Ltd.

Address before: Unit 102-402, No. 12, guanri Road, phase II, Xiamen Software Park, Fujian Province, 361000

Patentee before: XIAMEN MEIYA PICO INFORMATION Co.,Ltd.

CP03 Change of name, title or address

Address after: Unit 102-402, No. 12 Guanri Road, Phase II, Software Park, Xiamen Torch High tech Zone, Xiamen, Fujian Province, 361000

Patentee after: Guotou Intelligent (Xiamen) Information Co.,Ltd.

Country or region after: China

Patentee after: China Electronics Engineering Design Institute Co.,Ltd.

Address before: Unit 102-402, No. 12, guanri Road, phase II, Xiamen Software Park, Fujian Province, 361000

Patentee before: XIAMEN MEIYA PICO INFORMATION Co.,Ltd.

Country or region before: China

Patentee before: CHINA ELECTRONICS ENGINEERING DESIGN INSTITUTE Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20240524

Address after: Unit 102-402, No. 12 Guanri Road, Phase II, Software Park, Xiamen Torch High tech Zone, Xiamen, Fujian Province, 361000

Patentee after: Guotou Intelligent (Xiamen) Information Co.,Ltd.

Country or region after: China

Address before: Unit 102-402, No. 12 Guanri Road, Phase II, Software Park, Xiamen Torch High tech Zone, Xiamen, Fujian Province, 361000

Patentee before: Guotou Intelligent (Xiamen) Information Co.,Ltd.

Country or region before: China

Patentee before: China Electronics Engineering Design Institute Co.,Ltd.