CN113569911A - Vehicle identification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113569911A
Authority
CN
China
Prior art keywords
vehicle
candidate
similarity
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110722566.5A
Other languages
Chinese (zh)
Inventor
蒋旻悦
谭啸
孙昊
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110722566.5A
Publication of CN113569911A

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a vehicle identification method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence, in particular to technical fields such as computer vision and deep learning, and specifically applicable to smart city and intelligent transportation scenarios. The specific implementation scheme is as follows: acquiring an image of a vehicle to be identified, and extracting first global feature information of the image; acquiring at least one candidate vehicle based on the first global feature information; extracting first pose feature information of the vehicle to be identified from the image; and acquiring, from the at least one candidate vehicle, a target vehicle matching the vehicle to be identified based on the first pose feature information. By identifying the vehicle based on both global features and pose features, vehicles with high similarity in appearance and/or pose can be screened out as the target vehicle, which improves the accuracy of vehicle identification.

Description

Vehicle identification method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to technical fields such as computer vision and deep learning, and can specifically be applied in smart city and intelligent transportation scenarios.
Background
In the related art, the pose of a vehicle in a picture changes with the shooting angle. As a result, when vehicles are identified by their appearance features alone, two different vehicles captured in similar poses are easily misidentified as the same vehicle. How to identify vehicles accurately has therefore become an important research direction.
Disclosure of Invention
The disclosure provides a vehicle identification method, a vehicle identification device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a vehicle identification method including:
acquiring an image of a vehicle to be identified, and extracting first global feature information of the image;
acquiring at least one candidate vehicle based on the first global feature information;
extracting first pose feature information of the vehicle to be identified from the image;
and acquiring, from the at least one candidate vehicle, a target vehicle matching the vehicle to be identified based on the first pose feature information. By identifying the vehicle based on both global features and pose features, vehicles with high similarity in appearance and/or pose can be screened out as the target vehicle, which improves the accuracy of vehicle identification.
According to another aspect of the present disclosure, there is provided a vehicle identification device including:
the global feature extraction module is used for acquiring an image of a vehicle to be identified and extracting first global feature information of the image;
a candidate vehicle obtaining module for obtaining at least one candidate vehicle based on the first global feature information;
the pose feature extraction module is used for extracting first pose feature information of the vehicle to be recognized from the image;
and the target vehicle acquisition module is used for acquiring, from the at least one candidate vehicle, a target vehicle matching the vehicle to be recognized based on the first pose feature information.
According to another aspect of the present disclosure, there is provided an electronic device comprising at least one processor, and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle identification method of the embodiment of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the vehicle identification method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the vehicle identification method of an embodiment of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a vehicle identification method according to one embodiment of the present disclosure;
FIG. 2 is a flow chart of a vehicle identification method according to one embodiment of the present disclosure;
FIG. 3 is a flow chart of a vehicle identification method according to one embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a vehicle identification method according to one embodiment of the present disclosure;
FIG. 5 is a schematic illustration of a vehicle identification method according to one embodiment of the present disclosure;
FIG. 6 is a flow chart of a vehicle identification method according to one embodiment of the present disclosure;
FIG. 7 is a block diagram of a vehicle identification device according to one embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing a vehicle identification method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In order to facilitate understanding of the present disclosure, the following description is first briefly made to the technical field to which the present disclosure relates.
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), spanning both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning, deep learning, big data processing, and knowledge graph technologies.
Deep learning learns the intrinsic laws and representation levels of sample data, and the information obtained during learning is very helpful for interpreting data such as text, images, and sound. Its ultimate goal is to give machines human-like abilities to analyze and learn, so that they can recognize data such as text, images, and sound. Deep learning is a complex machine learning approach whose results in speech and image recognition far exceed those of earlier related techniques.
Intelligent transportation is a comprehensive transportation management technology that effectively integrates advanced information technology, data communication and transmission technology, electronic sensing technology, control technology, computer technology, and the like into the entire ground transportation management system, establishing a real-time, accurate, and efficient system that functions over a wide area and in all directions.
Computer vision is an interdisciplinary scientific field that studies how computers can gain high-level understanding from digital images or videos. From an engineering point of view, it seeks to automate tasks that the human visual system can accomplish. Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and for extracting high-dimensional data from the real world to produce numerical or symbolic information, for example in the form of decisions.
The vehicle identification method, apparatus, electronic device, and storage medium of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a vehicle identification method according to an embodiment of the present disclosure, as shown in fig. 1, the method including the steps of:
s101, obtaining an image of a vehicle to be identified, and extracting first global feature information of the image.
An image of the vehicle to be identified is acquired from a certain angle. In the embodiment of the present disclosure, the image may contain part of the vehicle to be identified or the entire vehicle. The image may be a captured still image, a frame from a video sequence, a composite image, or the like.
Global feature extraction is performed on the image of the vehicle to be identified to obtain the first global feature information. Optionally, the image of the vehicle to be identified is input into a neural network, which extracts the first global feature information of the image; the first global feature information may be a global feature represented by a vector. For example, the feature extraction layers of the neural network perform convolution and pooling operations on the image to obtain the first global feature information.
Alternatively, the neural network may be any suitable neural network that can extract global feature information, including but not limited to a global convolutional neural network, and the like.
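As a rough illustration of the convolution-and-pooling idea above (not the patent's actual network), the sketch below computes a toy "global feature" vector by average-pooling a grayscale image over a fixed grid; a production system would instead use the pooled activations of a CNN backbone. All names here are invented for the example.

```python
def global_descriptor(image, grid=4):
    """image: 2D list of pixel intensities; returns a grid*grid feature vector
    built by average-pooling each grid cell (a stand-in for CNN pooling)."""
    h, w = len(image), len(image[0])
    feat = []
    for gy in range(grid):
        for gx in range(grid):
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            cell = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            feat.append(sum(cell) / len(cell))  # average pooling per cell
    return feat

img = [[(x + y) % 16 for x in range(8)] for y in range(8)]
vec = global_descriptor(img, grid=2)
print(len(vec))  # 4-dimensional descriptor for a 2x2 grid
```

The resulting vector plays the role of the "first global feature information" when matching against the database.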
And S102, acquiring at least one candidate vehicle based on the first global feature information.
In the present disclosure, images of different vehicles are collected in advance, and the images of different vehicles are stored in a database. When the first global feature information is acquired, at least one vehicle image similar to the image of the vehicle to be identified may be acquired based on the first global feature information, and then the vehicle corresponding to the vehicle image is taken as a candidate vehicle.
In some implementations, second global feature information of existing vehicle images in the database is extracted and matched with the first global feature information, and at least one candidate vehicle is obtained according to a matching result. The process of extracting the second global feature information may refer to the related description of extracting the first global feature information in step S101, and is not described herein again.
S103, extracting first pose feature information of the vehicle to be recognized from the image.
Depending on the driving direction of the vehicle to be recognized, the vehicle pose and the vehicle components visible in the image also differ. For example, when the driving direction is the same as the shooting direction, the acquired image may include the rear turn signals, the trunk, and the like; when the driving direction is opposite to the shooting direction, the acquired image may include the bumper, the logo, both rearview mirrors, both headlights, and the like.
To improve the accuracy of vehicle identification, the embodiment of the disclosure further screens the candidate vehicles according to the vehicle components in the images and obtains the target vehicle from among them. In some implementations, the pose of the vehicle to be recognized is identified in the image to obtain the driving direction; the components of the vehicle are detected to obtain images of the vehicle component regions; local feature information of the vehicle components is then extracted from these region images; and the driving direction and the local feature information together serve as the first pose feature information. Optionally, a neural network may be used to extract the local feature information from the image.
S104, acquiring, from the at least one candidate vehicle, a target vehicle matching the vehicle to be recognized based on the first pose feature information.
After the first pose feature information is acquired, at least one vehicle image whose pose features are similar to those of the image of the vehicle to be recognized can be obtained based on the first pose feature information, further screening the candidate vehicles. As one possible implementation, second pose feature information is extracted from the pictures of the candidate vehicles, the similarity between the first pose feature information of the vehicle to be recognized and the second pose feature information of each candidate vehicle is determined, and the candidate vehicle matching the vehicle to be recognized according to the similarity is taken as the target vehicle.
In the embodiment of the disclosure, an image of a vehicle to be identified is obtained and first global feature information of the image is extracted; at least one candidate vehicle is acquired based on the first global feature information; first pose feature information of the vehicle to be identified is extracted from the image; and a target vehicle matching the vehicle to be identified is acquired from the at least one candidate vehicle based on the first pose feature information. By identifying the vehicle based on both global features and pose features, vehicles with high similarity in appearance and/or pose can be screened out as the target vehicle, improving the accuracy of vehicle identification.
Fig. 2 is a flowchart of a vehicle identification method according to another embodiment of the present disclosure, and as shown in fig. 2, on the basis of the above embodiment, at least one candidate vehicle is obtained based on the first global feature information, including the following steps:
s201, obtaining the similarity between the first global feature information and the second global feature information of each vehicle in the database.
Second global feature information is extracted from the images of the vehicles in the database and matched against the first global feature information; the similarity between the first global feature information and the second global feature information is taken as the similarity between the vehicle to be identified and each vehicle in the database. Optionally, the cosine distance between the first global feature information and the second global feature information may be used as this similarity.
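The cosine-based comparison described above can be sketched as follows; this is a minimal stand-alone implementation (the patent does not prescribe a particular library), where 1.0 means the two feature vectors point in the same direction.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))           # 1.0
print(round(cosine_similarity([1.0, 0.0], [0.0, 1.0]), 6))  # 0.0
```

In practice the same comparison would run between the query's first global feature vector and the stored second global feature vector of every database vehicle.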
S202, sorting all vehicles in the database according to the similarity, and screening at least one candidate vehicle according to the sorting.
The vehicles in the database are sorted by similarity. Optionally, the vehicles whose similarity is greater than a preset threshold are taken as candidate vehicles; alternatively, the N vehicles with the highest similarity, i.e. the top N vehicles after sorting, are taken as candidate vehicles, where N is a preset positive integer.
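Both screening options (similarity threshold, or top N after sorting) can be sketched in one small helper; the function name and dictionary-based interface are illustrative, not from the patent.

```python
def screen_candidates(similarities, threshold=None, top_n=None):
    """similarities: {vehicle_id: similarity}. Returns candidate ids, either
    all vehicles above a preset threshold or the N most similar ones."""
    ranked = sorted(similarities, key=similarities.get, reverse=True)
    if threshold is not None:
        return [v for v in ranked if similarities[v] > threshold]
    return ranked[:top_n]

sims = {"car_a": 0.91, "car_b": 0.42, "car_c": 0.77}
print(screen_candidates(sims, top_n=2))        # ['car_a', 'car_c']
print(screen_candidates(sims, threshold=0.5))  # ['car_a', 'car_c']
```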
In the embodiment of the disclosure, the similarity between the first global feature information and the second global feature information of each vehicle in the database is obtained; the vehicles in the database are sorted according to the similarity, and at least one candidate vehicle is screened out according to the sorting. By effectively utilizing the first global feature information to pre-screen candidate vehicles from the database, the computation required by the subsequent screening process is reduced, improving the efficiency of subsequently identifying the target vehicle.
Fig. 3 is a flowchart of a vehicle identification method according to an embodiment of the present disclosure, as shown in fig. 3, the method including the steps of:
s301, acquiring the driving direction of the vehicle to be identified from the image.
The position of the vehicle to be identified is extracted from the image, and the angle between the vehicle and a reference line of the image is determined based on that position. The angle is compared with the angle ranges of a plurality of candidate driving directions to determine the target angle range in which it falls, and the candidate driving direction corresponding to the target angle range is determined as the driving direction of the vehicle to be identified. For example, if the shooting direction is from south to north, the reference line of the image is a straight line in the east-west direction. The circumscribed rectangle of the region where the vehicle is located is used as the vehicle's detection box, and the center point of the detection box is used as the position of the vehicle; this position is connected to a preset point, the angle between the resulting line and the reference line is obtained, and the angle is compared with the angle ranges of the candidate driving directions. Optionally, as shown in fig. 4, the embodiment of the present application uses 8 candidate driving directions, in which the angle range corresponding to due east is (-22.5°, 22.5°), the range corresponding to southeast is (22.5°, 67.5°), the range corresponding to due south is (67.5°, 112.5°), and so on; the angle ranges of all directions can be determined in this way, and the candidate driving direction corresponding to the target angle range containing the angle is then determined as the driving direction of the vehicle to be identified.
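The 8-way binning above (45° per direction, east centred on 0°) reduces to a single modular-arithmetic lookup; this sketch assumes the east-to-southeast-to-south ordering given in the example and invents the function name.

```python
def driving_direction(angle_deg):
    """Map the angle between the vehicle line and the image reference line to
    one of 8 candidate driving directions (45-degree bins, east spans
    (-22.5, 22.5], matching the ranges described in the text)."""
    dirs = ["east", "southeast", "south", "southwest",
            "west", "northwest", "north", "northeast"]
    a = (angle_deg + 22.5) % 360  # shift so the east bin starts at 0
    return dirs[int(a // 45)]

print(driving_direction(0))    # east
print(driving_direction(90))   # south
print(driving_direction(-45))  # northeast
```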
S302, local characteristic information of visible vehicle parts of the vehicle to be identified is obtained from the image.
Component classification detection is performed on the image to obtain detection boxes for the visible vehicle components. In some implementations, the image of the vehicle to be recognized is input into a classification detection network, which outputs the category, position, and region of each vehicle component; the circumscribed rectangle of each component region is used as that component's detection box.
Local feature information is then extracted at the position corresponding to each detection box. In some implementations, the image patch corresponding to the detection box is passed through a region-of-interest alignment (RoI Align) layer: the image region inside the detection box is divided into a plurality of cells, four fixed sampling positions are computed in each cell, the values at those positions are obtained by bilinear interpolation, and a max-pooling operation is then applied to obtain the local feature information at the position corresponding to the detection box.
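The bilinear-sampling-plus-max-pooling step can be sketched as below. This is a simplified, single-channel, single-cell stand-in for a real RoI Align layer (which operates on multi-channel feature maps); names and the choice of sampling fractions are illustrative.

```python
def bilinear(image, y, x):
    """Bilinearly interpolate a 2D list `image` at fractional (y, x)."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(image) - 1)
    x1 = min(x0 + 1, len(image[0]) - 1)
    dy, dx = y - y0, x - x0
    top = image[y0][x0] * (1 - dx) + image[y0][x1] * dx
    bot = image[y1][x0] * (1 - dx) + image[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def roi_align_cell(image, y0, x0, y1, x1):
    """One RoI-Align-style cell: sample 4 fixed interior points bilinearly,
    then max-pool them, as in the step described above."""
    pts = [(y0 + (y1 - y0) * fy, x0 + (x1 - x0) * fx)
           for fy in (0.25, 0.75) for fx in (0.25, 0.75)]
    return max(bilinear(image, y, x) for y, x in pts)

img = [[float(x + y) for x in range(4)] for y in range(4)]
print(roi_align_cell(img, 0.0, 0.0, 2.0, 2.0))  # 3.0
```

A full layer would tile the detection box into a grid of such cells and concatenate the pooled values into the local feature vector.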
S303, taking the driving direction and the local feature information as the first pose feature information of the vehicle to be recognized.
In some implementations, the first pose feature information of the vehicle to be recognized comprises the driving direction of the vehicle and the local feature information.
In some implementations, the driving direction and the local feature information are input into a fully connected layer for feature fusion to obtain the first pose feature information.
Optionally, a visible-proportion parameter of each visible vehicle component can be determined from the size of its detection box and the actual size of the visible component, and used as one element of the first pose feature information; this enriches the first pose feature information and further improves the accuracy of vehicle identification. For example, the ratio of the area of the visible vehicle component to the area of its detection box is used as the visible-proportion parameter.
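The area-ratio option mentioned above is a one-line computation; the helper below is a hypothetical illustration (the patent does not fix units or an API).

```python
def visible_ratio(component_area, box_w, box_h):
    """Visible-proportion parameter: ratio of the visible component's area to
    the area of its detection box (one option suggested in the text)."""
    return component_area / (box_w * box_h)

# e.g. a mirror whose visible pixels cover 600 px^2 inside a 40x30 box
print(visible_ratio(600, 40, 30))  # 0.5
```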
In the embodiment of the disclosure, the driving direction of the vehicle to be recognized and the local feature information of its visible vehicle components are acquired from the image and together used as the first pose feature information of the vehicle to be recognized. Using the image-based driving direction and local feature information as the first pose feature information makes it convenient to subsequently determine the target vehicle from among the candidate vehicles, improving the accuracy of vehicle identification.
Fig. 6 is a flowchart of a vehicle identification method according to one embodiment of the present disclosure. As shown in fig. 6, the method includes the following steps:
and S601, acquiring second attitude characteristic information of each candidate vehicle.
For the content of obtaining the second posture characteristic information of the candidate vehicle in step S601, reference may be made to the description of obtaining the first posture characteristic information in the foregoing embodiment, and details are not repeated here.
S602, acquiring the attitude similarity between the first attitude characteristic information and each second attitude characteristic information.
For each candidate vehicle, the first pose feature information of each vehicle component is matched against the corresponding second pose feature information to obtain per-component similarities. In some implementations, the pose similarity comprises a first similarity in driving direction and a second similarity over the visible vehicle components; the pose similarity may be computed as the average of the first similarity and the second similarity, or as their weighted average.
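The averaging options above can be sketched as follows; the default weight of 0.5 reproduces the plain average, while other weights give the weighted-average variant. The weights themselves are assumptions, since the patent does not specify values.

```python
def pose_similarity(dir_sim, component_sims, w_dir=0.5):
    """Combine the driving-direction similarity with per-component
    similarities. w_dir=0.5 gives the plain average of the two terms;
    other values give a weighted average (weights are illustrative)."""
    comp_sim = sum(component_sims) / len(component_sims)
    return w_dir * dir_sim + (1 - w_dir) * comp_sim

print(round(pose_similarity(1.0, [0.8, 0.6]), 3))        # 0.85
print(round(pose_similarity(1.0, [0.8, 0.6], 0.25), 3))  # 0.775
```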
and S603, identifying the target vehicle from the at least one candidate vehicle according to the attitude similarity.
In some implementations, the pose similarity comprises a first similarity in driving direction and a second similarity over the visible vehicle components, and the target candidate vehicles obtained from the at least one candidate vehicle are those whose first similarity and second similarity both satisfy their respective similarity thresholds. The number of target candidate vehicles is then obtained; in response to the number being greater than a set value, the similarity thresholds are raised and the target candidate vehicles are reselected, until the number is not greater than the set number. For example, in some implementations the image of the vehicle to be recognized includes vehicle components such as a bumper, the right rearview mirror, and the right headlamp; the candidate vehicles whose first similarity satisfies a first similarity threshold and whose second similarity satisfies a second similarity threshold are taken as target candidate vehicles. If the number of target candidate vehicles satisfies the set value, they are determined to be the target vehicles; otherwise, the first and second similarity thresholds are raised and the target candidate vehicles are reselected with the updated thresholds, until the number is not greater than the set number, thereby obtaining the target vehicle. Optionally, the set number may be 1.
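The threshold-raising loop described above can be sketched as follows. The starting thresholds, step size, and the use of a >= comparison are all assumptions; the patent only states that the thresholds are raised until few enough candidates remain.

```python
def select_targets(candidates, t1=0.5, t2=0.5, max_targets=1, step=0.05):
    """candidates: list of (vehicle_id, dir_sim, comp_sim) tuples.
    Keep candidates passing both thresholds; if too many remain, raise
    both thresholds and retry, as described in the text."""
    while True:
        kept = [v for v, d, c in candidates if d >= t1 and c >= t2]
        if len(kept) <= max_targets:
            return kept
        t1, t2 = t1 + step, t2 + step  # raise both similarity thresholds

cands = [("car_a", 0.9, 0.95), ("car_b", 0.9, 0.6), ("car_c", 0.4, 0.9)]
print(select_targets(cands))  # ['car_a']
```

The loop terminates because the thresholds eventually exceed every candidate's similarity, leaving an empty (hence small enough) set.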
In some implementations, the pose similarity is the average or weighted average of the first similarity in driving direction and the second similarity over the visible vehicle components, and the candidate vehicle with the largest pose similarity is selected from the at least one candidate vehicle as the target vehicle.
In the embodiment of the disclosure, second pose feature information of each candidate vehicle is acquired, the pose similarity between the first pose feature information and each piece of second pose feature information is obtained, and the target vehicle is identified from the at least one candidate vehicle according to the pose similarity. Determining the target vehicle from the candidate vehicles according to the first and second pose feature information reduces the influence of non-target vehicles that resemble the vehicle to be identified, so that vehicles with high similarity in appearance and/or pose can be accurately screened out as the target vehicle, improving the accuracy of vehicle identification.
Fig. 7 is a block diagram of a vehicle recognition device according to an embodiment of the present disclosure, and as shown in fig. 7, a vehicle recognition device 700 includes:
the global feature extraction module 710 is configured to obtain an image of a vehicle to be identified, and extract first global feature information of the image;
a candidate vehicle obtaining module 720, configured to obtain at least one candidate vehicle based on the first global feature information;
the pose feature extraction module 730 is used for extracting first pose feature information of the vehicle to be recognized from the image;
and the target vehicle obtaining module 740 is configured to obtain, from the at least one candidate vehicle, a target vehicle matching the vehicle to be recognized based on the first pose feature information.
By identifying the vehicle to be identified based on both global features and pose features, the device can screen out vehicles with high similarity in appearance and/or pose as the target vehicle, improving the accuracy of vehicle identification.
It should be noted that the foregoing explanation of the embodiment of the vehicle identification method is also applicable to the vehicle identification device of the embodiment, and is not repeated here.
Further, in a possible implementation manner of the embodiment of the present disclosure, the candidate vehicle obtaining module 720 is further configured to: acquiring the similarity between the first global feature information and second global feature information of each vehicle in the database; and sorting all vehicles in the database according to the similarity, and screening at least one candidate vehicle according to the sorting.
Further, in a possible implementation manner of the embodiment of the present disclosure, the pose feature extraction module 730 is further configured to: acquire the driving direction of the vehicle to be identified from the image; obtain the local feature information of the visible vehicle components of the vehicle to be recognized from the image; and take the driving direction and the local feature information as the first pose feature information of the vehicle to be recognized.
Further, in a possible implementation manner of the embodiment of the present disclosure, the pose feature extraction module 730 is further configured to: carrying out component classification detection on the image to obtain a detection frame of the visible vehicle component; and extracting local characteristic information at the corresponding position of the detection frame.
Further, in a possible implementation manner of the embodiment of the present disclosure, the pose feature extraction module 730 is further configured to: extracting the position of the vehicle to be identified from the image, and determining an included angle between the vehicle to be identified and a reference line of the image based on the position; comparing the included angle with the angle ranges of a plurality of candidate driving directions to determine a target angle range where the included angle is located; and determining the candidate driving direction corresponding to the target angle range as the driving direction of the vehicle to be identified.
Further, in a possible implementation of the embodiment of the present disclosure, the target vehicle obtaining module 740 is further configured to: acquire second attitude characteristic information of each candidate vehicle; acquire the attitude similarity between the first attitude characteristic information and each item of second attitude characteristic information; and identify the target vehicle from the at least one candidate vehicle according to the attitude similarity.
Further, in a possible implementation of the embodiment of the present disclosure, the attitude similarity includes a first similarity in the driving direction and a second similarity on the visible vehicle components, and the target vehicle obtaining module 740 is further configured to: acquire, from the at least one candidate vehicle, target candidate vehicles whose first similarity and second similarity both meet their respective similarity thresholds; and acquire the number of target candidate vehicles, raise the similarity thresholds in response to that number being greater than a set value, and reselect the target candidate vehicles until the number is no greater than the set value.
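The raise-and-reselect loop above can be sketched as follows. The initial thresholds, the step size, and the set value (`max_count`) are illustrative assumptions, not values fixed by the disclosure:

```python
def select_target_candidates(candidates, thr_dir=0.5, thr_part=0.5,
                             max_count=3, step=0.05):
    """Keep candidates whose direction similarity and visible-part similarity
    both meet their thresholds; if more than max_count remain, raise both
    thresholds and reselect until at most max_count survive.

    candidates: list of (vehicle_id, direction_sim, part_sim) tuples, with
    similarities in [0, 1] so the loop is guaranteed to terminate.
    """
    while True:
        selected = [vid for vid, s1, s2 in candidates
                    if s1 >= thr_dir and s2 >= thr_part]
        if len(selected) <= max_count:
            return selected
        thr_dir += step      # raise both thresholds and reselect
        thr_part += step
```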
Further, in a possible implementation of the embodiment of the present disclosure, the target vehicle obtaining module 740 is further configured to: select the candidate vehicle with the greatest attitude similarity from the at least one candidate vehicle as the target vehicle.
Further, in a possible implementation of the embodiment of the present disclosure, the pose feature extraction module 730 is further configured to: determine a visible proportion parameter of the visible vehicle component according to the size of the detection frame and the actual size of the visible vehicle component; and use the visible proportion parameter as one item of feature information in the first posture feature information.
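A minimal sketch of the visible proportion parameter is given below. A clamped area ratio is an assumed formulation; the disclosure only states that the parameter is derived from the detection-frame size and the component's actual size:

```python
def visible_proportion(box_w, box_h, actual_w, actual_h):
    """Estimate how much of a vehicle component is visible by comparing the
    detection-frame area with the component's expected full-size area at the
    same image scale.
    """
    if actual_w <= 0 or actual_h <= 0:
        raise ValueError("actual component size must be positive")
    ratio = (box_w * box_h) / (actual_w * actual_h)
    return min(ratio, 1.0)   # a fully visible component saturates at 1.0
```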
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store the various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to one another by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the vehicle identification method. For example, in some embodiments, the vehicle identification method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the vehicle identification method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the vehicle identification method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (21)

1. A vehicle identification method, comprising:
acquiring an image of a vehicle to be identified, and extracting first global feature information of the image;
acquiring at least one candidate vehicle based on the first global feature information;
extracting first attitude characteristic information of the vehicle to be recognized from the image;
and acquiring a target vehicle matched with the vehicle to be recognized based on the first posture characteristic information from the at least one candidate vehicle.
2. The method of claim 1, wherein the obtaining at least one candidate vehicle based on the first global feature information comprises:
acquiring the similarity between the first global feature information and second global feature information of each vehicle in a database;
and sorting all vehicles in the database according to the similarity, and screening the at least one candidate vehicle according to the sorting.
3. The method according to claim 1 or 2, wherein the extracting first posture feature information of the vehicle to be recognized from the image comprises:
acquiring the driving direction of the vehicle to be identified from the image;
obtaining local characteristic information of visible vehicle parts of the vehicle to be identified from the image;
and taking the driving direction and the local characteristic information as first attitude characteristic information of the vehicle to be recognized.
4. The method of claim 3, wherein the obtaining local characteristic information of visible vehicle parts of the vehicle to be identified from the image comprises:
carrying out component classification detection on the image to acquire a detection frame of the visible vehicle component;
and extracting local characteristic information at the corresponding position of the detection frame.
5. The method of claim 3, wherein the obtaining the driving direction of the vehicle to be identified from the image comprises:
extracting the position of the vehicle to be identified from the image, and determining an included angle between the vehicle to be identified and a reference line of the image based on the position;
comparing the included angle with angle ranges of a plurality of candidate driving directions to determine a target angle range in which the included angle falls;
and determining the candidate driving direction corresponding to the target angle range as the driving direction of the vehicle to be identified.
6. The method of claim 3, wherein the obtaining, from the at least one candidate vehicle, a target vehicle matching the vehicle to be recognized based on the first pose feature information comprises:
acquiring second attitude characteristic information of each candidate vehicle;
acquiring attitude similarity between the first attitude characteristic information and each second attitude characteristic information;
and identifying the target vehicle from the at least one candidate vehicle according to the attitude similarity.
7. The method of claim 6, wherein the pose similarity includes a first similarity in a driving direction and a second similarity on visible vehicle components, wherein the identifying the target vehicle from the at least one candidate vehicle based on the pose similarity comprises:
obtaining target candidate vehicles, of which the first similarity and the second similarity both meet respective similarity thresholds, from the at least one candidate vehicle;
and acquiring the number of the target candidate vehicles, raising the similarity thresholds in response to the number being larger than a set value, and reselecting the target candidate vehicles until the number is not larger than the set value.
8. The method of claim 6, wherein the identifying the target vehicle from the at least one candidate vehicle according to the attitude similarity comprises:
and selecting the candidate vehicle with the maximum attitude similarity from the at least one candidate vehicle as the target vehicle.
9. The method of claim 4, wherein the method further comprises:
determining a visible proportion parameter of the visible vehicle component according to the size of the detection frame and the actual size of the visible vehicle component;
and taking the visible proportion parameter as one item of feature information in the first posture feature information.
10. A vehicle identification device comprising:
the global feature extraction module is used for acquiring an image of a vehicle to be identified and extracting first global feature information of the image;
a candidate vehicle obtaining module for obtaining at least one candidate vehicle based on the first global feature information;
the attitude feature extraction module is used for extracting first attitude feature information of the vehicle to be recognized from the image;
and the target vehicle acquisition module is used for acquiring a target vehicle matched with the vehicle to be recognized from the at least one candidate vehicle based on the first posture characteristic information.
11. The apparatus of claim 10, wherein the candidate vehicle acquisition module is further configured to:
acquiring the similarity between the first global feature information and second global feature information of each vehicle in a database;
and sorting all vehicles in the database according to the similarity, and screening the at least one candidate vehicle according to the sorting.
12. The apparatus of claim 10 or 11, wherein the pose feature extraction module is further configured to:
acquiring the driving direction of the vehicle to be identified from the image;
obtaining local characteristic information of visible vehicle parts of the vehicle to be identified from the image;
and taking the driving direction and the local characteristic information as first attitude characteristic information of the vehicle to be recognized.
13. The apparatus of claim 12, wherein the pose feature extraction module is further configured to:
carrying out component classification detection on the image to acquire a detection frame of the visible vehicle component;
and extracting local characteristic information at the corresponding position of the detection frame.
14. The apparatus of claim 12, wherein the pose feature extraction module is further configured to:
extracting the position of the vehicle to be identified from the image, and determining an included angle between the vehicle to be identified and a reference line of the image based on the position;
comparing the included angle with angle ranges of a plurality of candidate driving directions to determine a target angle range in which the included angle falls;
and determining the candidate driving direction corresponding to the target angle range as the driving direction of the vehicle to be identified.
15. The apparatus of claim 12, wherein the target vehicle acquisition module is further configured to:
acquiring second attitude characteristic information of each candidate vehicle;
acquiring attitude similarity between the first attitude characteristic information and each second attitude characteristic information;
and identifying the target vehicle from the at least one candidate vehicle according to the attitude similarity.
16. The apparatus of claim 15, wherein the target vehicle acquisition module is further configured to:
obtaining target candidate vehicles, of which the first similarity and the second similarity both meet respective similarity thresholds, from the at least one candidate vehicle;
and acquiring the number of the target candidate vehicles, raising the similarity thresholds in response to the number being larger than a set value, and reselecting the target candidate vehicles until the number is not larger than the set value.
17. The apparatus of claim 15, wherein the target vehicle acquisition module is further configured to:
and selecting the candidate vehicle with the maximum attitude similarity from the at least one candidate vehicle as the target vehicle.
18. The apparatus of claim 13, wherein the pose feature extraction module is further configured to:
determining a visible proportion parameter of the visible vehicle component according to the size of the detection frame and the actual size of the visible vehicle component;
and taking the visible proportion parameter as one item of feature information in the first posture feature information.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
CN202110722566.5A 2021-06-28 2021-06-28 Vehicle identification method and device, electronic equipment and storage medium Pending CN113569911A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110722566.5A CN113569911A (en) 2021-06-28 2021-06-28 Vehicle identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110722566.5A CN113569911A (en) 2021-06-28 2021-06-28 Vehicle identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113569911A true CN113569911A (en) 2021-10-29

Family

ID=78162858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110722566.5A Pending CN113569911A (en) 2021-06-28 2021-06-28 Vehicle identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113569911A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114297534A (en) * 2022-02-28 2022-04-08 京东方科技集团股份有限公司 Method, system and storage medium for interactively searching target object
CN115205555A (en) * 2022-07-12 2022-10-18 北京百度网讯科技有限公司 Method for determining similar images, training method, information determination method and equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229468A (en) * 2017-06-28 2018-06-29 北京市商汤科技开发有限公司 Vehicle appearance feature recognition and vehicle retrieval method, apparatus, storage medium, electronic equipment
CN108596277A (en) * 2018-05-10 2018-09-28 腾讯科技(深圳)有限公司 A kind of testing vehicle register identification method, apparatus and storage medium
CN109145898A (en) * 2018-07-26 2019-01-04 清华大学深圳研究生院 A kind of object detecting method based on convolutional neural networks and iterator mechanism
CN109508623A (en) * 2018-08-31 2019-03-22 杭州千讯智能科技有限公司 Item identification method and device based on image procossing
CN109584300A (en) * 2018-11-20 2019-04-05 浙江大华技术股份有限公司 A kind of method and device of determining headstock towards angle
CN110458086A (en) * 2019-08-07 2019-11-15 北京百度网讯科技有限公司 Vehicle recognition methods and device again
CN111242088A (en) * 2020-01-22 2020-06-05 上海商汤临港智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN111611414A (en) * 2019-02-22 2020-09-01 杭州海康威视数字技术股份有限公司 Vehicle retrieval method, device and storage medium
CN111931627A (en) * 2020-08-05 2020-11-13 智慧互通科技有限公司 Vehicle re-identification method and device based on multi-mode information fusion
CN112990217A (en) * 2021-03-24 2021-06-18 北京百度网讯科技有限公司 Image recognition method and device for vehicle, electronic equipment and medium


Similar Documents

Publication Publication Date Title
CN113221677B (en) Track abnormality detection method and device, road side equipment and cloud control platform
CN113902897A (en) Training of target detection model, target detection method, device, equipment and medium
CN113920307A (en) Model training method, device, equipment, storage medium and image detection method
CN113177968A (en) Target tracking method and device, electronic equipment and storage medium
CN113139543A (en) Training method of target object detection model, target object detection method and device
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN113642431A (en) Training method and device of target detection model, electronic equipment and storage medium
CN113947188A (en) Training method of target detection network and vehicle detection method
CN112528858A (en) Training method, device, equipment, medium and product of human body posture estimation model
CN113792742A (en) Semantic segmentation method of remote sensing image and training method of semantic segmentation model
CN113569911A (en) Vehicle identification method and device, electronic equipment and storage medium
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
CN114332977A (en) Key point detection method and device, electronic equipment and storage medium
CN113569912A (en) Vehicle identification method and device, electronic equipment and storage medium
CN113378712A (en) Training method of object detection model, image detection method and device thereof
CN113378857A (en) Target detection method and device, electronic equipment and storage medium
CN116310993A (en) Target detection method, device, equipment and storage medium
CN115147809A (en) Obstacle detection method, device, equipment and storage medium
CN114581794A (en) Geographic digital twin information acquisition method and device, electronic equipment and storage medium
CN114169425A (en) Training target tracking model and target tracking method and device
CN113705381A (en) Target detection method and device in foggy days, electronic equipment and storage medium
CN113591569A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN114549961B (en) Target object detection method, device, equipment and storage medium
CN116188587A (en) Positioning method and device and vehicle
CN113920273B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination