CN118155264A - Vehicle inspection method, device, terminal and storage medium - Google Patents

Vehicle inspection method, device, terminal and storage medium

Info

Publication number
CN118155264A
Authority
CN
China
Prior art keywords
face
segmentation
face segmentation
suspicious
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410356671.5A
Other languages
Chinese (zh)
Inventor
杜乾
许勇
王泽政
杜鹏
马亮
张红丽
纪超丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Hanjia Electronic Technology Co ltd
Original Assignee
Hebei Hanjia Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Hanjia Electronic Technology Co ltd filed Critical Hebei Hanjia Electronic Technology Co ltd
Priority to CN202410356671.5A priority Critical patent/CN118155264A/en
Publication of CN118155264A publication Critical patent/CN118155264A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle inspection method, a vehicle inspection device, a terminal and a storage medium. The method comprises the following steps: when a vehicle is about to drive into an inspection area, acquiring a face snapshot image; performing scene detection on the face snapshot image according to a preset scene detection network, and determining the snapshot scene corresponding to the face snapshot image; performing face recognition on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image; and correcting the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image. According to the invention, the target face recognition result corresponding to the face snapshot image can be accurately obtained for different snapshot scenes, reducing the possibility that misjudgments or missed detections by the face recognition equipment affect inspection efficiency during vehicle inspection.

Description

Vehicle inspection method, device, terminal and storage medium
Technical Field
The present invention relates to the field of intelligent security inspection technologies, and in particular, to a vehicle inspection method, device, terminal, and storage medium.
Background
Vehicle inspection is of great significance in maintaining regional stability and social security, and with the development of intelligent equipment, vehicle inspection is now generally performed by various vehicle inspection devices.
In general, vehicle inspection involves devices such as vehicle identification devices, face recognition devices, and credential identification devices. While using multiple vehicle inspection devices increases the degree of automation of vehicle inspection, in some scenarios a misjudgment or missed detection by a particular inspection device may reduce inspection efficiency. For example, when a face recognition device is used to count the occupants of a vehicle, duplicate counts or missed counts may occur, so that the vehicle cannot pass smoothly.
Disclosure of Invention
The embodiments of the invention provide a vehicle inspection method, a device, a terminal and a storage medium, which are used to solve the problem that misjudgments or missed detections by face recognition equipment during vehicle inspection may affect inspection efficiency.
In a first aspect, an embodiment of the present invention provides a vehicle inspection method, including:
when a vehicle is about to drive into an inspection area, acquiring a face snapshot image;
performing scene detection on the face snapshot image according to a preset scene detection network, and determining a snapshot scene corresponding to the face snapshot image;
performing face recognition on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image;
and correcting the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image.
In one possible implementation, the initial face recognition result includes a face segmentation result and a face segmentation confidence;
Correcting the initial face recognition result according to the snapshot scene to obtain the target face recognition result corresponding to the face snapshot image comprises:
determining suspicious face segmentation results and trusted face segmentation results in the face segmentation results according to a standard face segmentation result corresponding to the snapshot scene;
correcting the suspicious face segmentation result according to the standard face segmentation result, and correcting the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result, to obtain a corrected face recognition result;
and obtaining a target face recognition result corresponding to the face snapshot image according to the trusted face segmentation result, the trusted face segmentation confidence corresponding to the trusted face segmentation result, and the corrected face recognition result.
In a possible implementation manner, determining the suspicious face segmentation results and the trusted face segmentation results in the face segmentation results according to the standard face segmentation result corresponding to the snapshot scene includes:
calculating the cosine similarity and Euclidean distance between the standard face segmentation result corresponding to the snapshot scene and each face segmentation result, and performing convolution extraction on the standard face segmentation result corresponding to the snapshot scene and each face segmentation result to obtain the local information similarity between the standard face segmentation result corresponding to the snapshot scene and each face segmentation result;
carrying out weighted summation on the cosine similarity, the Euclidean distance and the local information similarity corresponding to each face segmentation result to obtain the similarity corresponding to each face segmentation result;
and determining suspicious face segmentation results and trusted face segmentation results in the face segmentation results according to the similarity corresponding to the face segmentation results and a preset similarity threshold.
In one possible implementation manner, correcting the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result includes:
determining whether the suspicious face segmentation confidence is greater than a confidence threshold;
and if the suspicious face segmentation confidence is greater than the confidence threshold, adjusting the standard segmentation confidence corresponding to the standard face segmentation result according to a target regulation coefficient corresponding to the suspicious face segmentation confidence, and taking the adjusted standard segmentation confidence as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
In one possible implementation manner, after determining whether the suspicious face segmentation confidence is greater than the confidence threshold, the method further includes:
if the suspicious face segmentation confidence is less than or equal to the confidence threshold, taking the standard segmentation confidence corresponding to the standard face segmentation result as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
In one possible implementation manner, the target regulation coefficient is determined by looking up a correspondence table between the suspicious face segmentation confidence and preset regulation coefficients.
In one possible implementation manner, the training process of the preset scene detection network includes:
acquiring the face snapshot images captured when the vehicle is in no-load, light-load, full-load and overload states to form a training set;
training an initial clustering network on the training set to obtain the preset scene detection network.
In a second aspect, an embodiment of the present invention provides a vehicle inspection device including:
an acquisition module, configured to acquire a face snapshot image when a vehicle is about to drive into an inspection area;
a first processing module, configured to perform scene detection on the face snapshot image according to a preset scene detection network and determine a snapshot scene corresponding to the face snapshot image;
a second processing module, configured to perform face recognition on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image;
and a third processing module, configured to correct the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image.
In a third aspect, an embodiment of the present invention provides a terminal, including a memory for storing a computer program and a processor for calling and running the computer program stored in the memory to perform the steps of the method according to the first aspect or any possible implementation of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above in the first aspect or any one of the possible implementations of the first aspect.
The embodiments of the invention provide a vehicle inspection method, a device, a terminal and a storage medium. When a vehicle is about to drive into an inspection area, a face snapshot image is acquired; scene detection is then performed on the face snapshot image according to a preset scene detection network, and the snapshot scene corresponding to the face snapshot image is determined; face recognition is performed on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image; and the initial face recognition result is corrected according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image. In this way, the target face recognition result corresponding to the face snapshot image is obtained accurately for different snapshot scenes, which reduces the possibility that misjudgments or missed detections by the face recognition equipment affect inspection efficiency during vehicle inspection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an implementation of a vehicle inspection method provided by an embodiment of the present invention;
fig. 2 is a schematic structural view of a vehicle inspection device according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the following description will be made by way of specific embodiments with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of an implementation of a vehicle inspection method provided by an embodiment of the present invention is shown, and details are as follows:
In step 101, a face snapshot is acquired when the vehicle is about to drive into an inspection area.
In general, when a vehicle is about to enter an inspection area for inspection, the vehicle and its occupants can be inspected using a combination of vehicle recognition devices, face recognition devices, certificate recognition devices, and the like. The face recognition device can also be used directly to check the occupants of a vehicle, for example to check buses, school buses, minibuses and sedans for overloading, or to check for illegal owners.
Optionally, in order to improve the usability of the acquired face snapshot image, when the vehicle is about to drive into the inspection area, the vehicle type can be identified by the vehicle identification device, the height of the face recognition device can be adjusted according to the identified vehicle type, and the face snapshot image can then be acquired by the height-adjusted face recognition device.
In step 102, scene detection is performed on the face snapshot image according to a preset scene detection network, and a snapshot scene corresponding to the face snapshot image is determined.
In this embodiment, the face snapshot image acquired by the face recognition device may suffer from overlap, occlusion and similar problems, so that when the face recognition device counts the occupants of the vehicle, duplicate counts or missed counts may occur, affecting vehicle inspection efficiency. Therefore, a preset scene detection network is obtained in advance, scene detection is performed on the face snapshot image according to the preset scene detection network, and the snapshot scene corresponding to the face snapshot image is determined, so that the occupants can be counted more accurately according to the snapshot scene corresponding to the face snapshot image.
Optionally, the training process of the preset scene detection network may include:
Acquiring the face snapshot images captured when the vehicle is in no-load, light-load, full-load and overload states to form a training set, and training an initial clustering network on the training set to obtain the preset scene detection network.
In this embodiment, because the face snapshot images corresponding to the no-load, light-load, full-load and overload states differ considerably, the preset scene detection network is trained on face snapshot images of these states, so that the snapshot scene corresponding to a newly acquired face snapshot image can be determined by the preset scene detection network.
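A minimal sketch of such a clustering-based scene detector, assuming the snapshot images have already been reduced to feature vectors; the plain k-means loop, the hand-seeded initial centers and the fixed four scene classes are illustrative assumptions, not details fixed by this patent.

```python
import numpy as np

SCENES = ["no-load", "light-load", "full-load", "overload"]

def train_scene_clusters(features, init_centers, iters=50):
    """Plain k-means over snapshot feature vectors; the caller maps each
    converged cluster to one of the four loading scenes."""
    centers = np.array(init_centers, dtype=float)
    for _ in range(iters):
        # assign every feature vector to its nearest cluster center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(len(centers)):
            members = features[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers, labels

def detect_scene(feature, centers, scene_names=SCENES):
    """Scene of a new snapshot = scene of the nearest cluster center."""
    return scene_names[int(np.linalg.norm(centers - feature, axis=1).argmin())]
```

With `init_centers` seeded from one labelled example per scene, the cluster order matches `SCENES`; a production system would learn the features and the initialisation rather than hand-seed them.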
In step 103, face recognition is performed on the face snap image according to a preset face recognition network, and an initial face recognition result corresponding to the face snap image is obtained.
In step 104, the initial face recognition result is corrected according to the snapshot scene, and the target face recognition result corresponding to the face snapshot image is obtained.
In this embodiment, scene detection is performed on the face snapshot image according to the preset scene detection network to determine the snapshot scene corresponding to the face snapshot image, and face recognition is performed on the face snapshot image according to the preset face recognition network to obtain the initial face recognition result corresponding to the face snapshot image; the initial face recognition result can then be corrected according to the snapshot scene to obtain a more accurate target face recognition result corresponding to the face snapshot image.
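As a minimal illustration, steps 101 to 104 can be sketched as a single pipeline; the network objects, function names and result shape below are hypothetical stand-ins chosen for the sketch, not interfaces defined by this patent.

```python
def inspect_vehicle(image, scene_net, face_net, correct_fn):
    """Detect the snapshot scene (step 102), run face recognition
    (step 103), then correct the initial result using the scene (step 104)."""
    scene = scene_net(image)             # e.g. "no-load" .. "overload"
    initial = face_net(image)            # initial face recognition result
    target = correct_fn(initial, scene)  # scene-corrected target result
    return scene, target

# Usage with stub callables standing in for the trained networks:
scene, target = inspect_vehicle(
    image="face-snapshot",
    scene_net=lambda img: "full-load",
    face_net=lambda img: {"segments": ["s1", "s2"], "confidences": [0.9, 0.4]},
    correct_fn=lambda result, scene: result,
)
```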
According to the embodiment of the invention, when a vehicle is about to drive into an inspection area, a face snapshot image is acquired; scene detection is then performed on the face snapshot image according to a preset scene detection network, and the snapshot scene corresponding to the face snapshot image is determined; face recognition is performed on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image; and the initial face recognition result is corrected according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image. In this way, the target face recognition result corresponding to the face snapshot image is obtained accurately for different snapshot scenes, which reduces the possibility that misjudgments or missed detections by the face recognition equipment affect inspection efficiency during vehicle inspection.
Alternatively, the initial face recognition result may include a face segmentation result and a face segmentation confidence.
Correcting the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image, which may include:
determining the suspicious face segmentation results and trusted face segmentation results in the face segmentation results according to the standard face segmentation result corresponding to the snapshot scene;
correcting the suspicious face segmentation result according to the standard face segmentation result, and correcting the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result, to obtain a corrected face recognition result;
and obtaining a target face recognition result corresponding to the face snapshot image according to the trusted face segmentation result, the trusted face segmentation confidence corresponding to the trusted face segmentation result, and the corrected face recognition result.
The process of performing face recognition on the face snapshot image according to the preset face recognition network can be understood as a process of performing face segmentation on the face snapshot image; after the preset face recognition network segments the faces in the face snapshot image, the confidence of each segmentation result, namely the face segmentation confidence, is obtained. Because the acquired face snapshot image generally corresponds to one of the snapshot scenes such as no-load, light-load, full-load or overload, the standard face segmentation result of the snapshot scene corresponding to the face snapshot image can be used to identify the suspicious and trusted face segmentation results, and the standard face segmentation result and standard segmentation confidence can be used to correct the suspicious face segmentation results and suspicious face segmentation confidences, thereby obtaining a more reliable final target face recognition result.
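The final assembly step, combining the trusted results with the corrected suspicious ones, can be sketched as a simple merge; the (segment, confidence) pair representation is a hypothetical shape chosen for illustration.

```python
def merge_results(trusted_segments, trusted_confidences, corrected_pairs):
    """Target face recognition result = trusted segmentations with their
    confidences, plus the corrected suspicious segmentations."""
    target = list(zip(trusted_segments, trusted_confidences))
    target.extend(corrected_pairs)
    return target

# Usage: the occupant count follows directly from the merged result.
target = merge_results(["f1", "f2"], [0.97, 0.91], [("f3", 0.85)])
occupants = len(target)
```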
Optionally, determining the suspicious face segmentation result and the trusted face segmentation result in the face segmentation result according to the standard face segmentation result corresponding to the snap scene may include:
Calculating the cosine similarity and Euclidean distance between the standard face segmentation result corresponding to the snapshot scene and each face segmentation result, and performing convolution extraction on them to obtain the local information similarity between the standard face segmentation result and each face segmentation result.
Carrying out weighted summation of the cosine similarity, Euclidean distance and local information similarity corresponding to each face segmentation result to obtain the similarity corresponding to each face segmentation result.
Determining, according to the similarity corresponding to each face segmentation result and a preset similarity threshold, the suspicious face segmentation results, which do not match the standard face segmentation result, and the trusted face segmentation results among the face segmentation results.
In this embodiment, in order to accurately distinguish suspicious and trusted face segmentation results, the standard face segmentation result corresponding to the snapshot scene is compared with each face segmentation result from several angles. The cosine similarity and Euclidean distance between the standard face segmentation result and each face segmentation result are calculated, and convolution extraction is performed on the standard face segmentation result and each face segmentation result to obtain their local information similarity. The similarity corresponding to each face segmentation result is then obtained from its cosine similarity, Euclidean distance and local information similarity, and the suspicious and trusted face segmentation results are determined according to each similarity and a preset similarity threshold.
For example, if the similarity corresponding to the face segmentation result is greater than the preset similarity threshold, it may be determined that the face segmentation result is a trusted face segmentation result. If the similarity corresponding to the face segmentation result is smaller than or equal to a preset similarity threshold value, the face segmentation result can be determined to be a suspicious face segmentation result.
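A sketch of this combined similarity, treating each segmentation result as a 2-D map. Mapping the Euclidean distance to 1/(1+d), using a mean filter as the "convolution extraction", and the weight and threshold values are all assumptions, since the patent does not fix them.

```python
import numpy as np

def local_mean(x, k=3):
    """Mean-filter a 2-D map (stand-in for the convolution extraction)."""
    windows = np.lib.stride_tricks.sliding_window_view(x.astype(float), (k, k))
    return windows.mean(axis=(-1, -2))

def combined_similarity(standard, segment, weights=(0.4, 0.3, 0.3)):
    """Weighted sum of cosine similarity, a Euclidean-distance term, and a
    local-information term between two segmentation maps."""
    a, b = standard.ravel().astype(float), segment.ravel().astype(float)
    eps = 1e-9  # guards against division by zero for all-zero maps
    cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    eud = 1.0 / (1.0 + np.linalg.norm(a - b))   # distance mapped into (0, 1]
    la, lb = local_mean(standard).ravel(), local_mean(segment).ravel()
    loc = float(la @ lb) / (np.linalg.norm(la) * np.linalg.norm(lb) + eps)
    w1, w2, w3 = weights
    return w1 * cos + w2 * eud + w3 * loc       # weighted summation

def classify_segmentation(similarity, threshold=0.8):
    """Above the preset threshold: trusted; otherwise: suspicious."""
    return "trusted" if similarity > threshold else "suspicious"
```

`classify_segmentation(combined_similarity(std_map, seg_map))` then labels each face segmentation result against the standard result of the detected scene.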
Optionally, correcting the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result may include:
Determining whether the suspicious face segmentation confidence is greater than a confidence threshold.
If the suspicious face segmentation confidence is greater than the confidence threshold, adjusting the standard segmentation confidence corresponding to the standard face segmentation result according to the target regulation coefficient corresponding to the suspicious face segmentation confidence, and taking the adjusted standard segmentation confidence as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
In this embodiment, when the suspicious face segmentation confidence corresponding to the suspicious face segmentation result is corrected according to the standard segmentation confidence corresponding to the standard face segmentation result, whether the suspicious face segmentation confidence is greater than the confidence threshold is first determined. If it is, the current face snapshot image is not overly cluttered and the suspicious face segmentation results retain some credibility, so the standard segmentation confidence corresponding to the standard face segmentation result is adjusted according to the target regulation coefficient corresponding to the suspicious face segmentation confidence, and the adjusted standard segmentation confidence is taken as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
Alternatively, the target regulation coefficient can be determined by looking up a correspondence table between the suspicious face segmentation confidence and preset regulation coefficients.
Optionally, after determining whether the confidence level of the suspicious face segmentation is greater than the confidence threshold, the method may further include: if the confidence coefficient of the suspicious face segmentation is smaller than or equal to the confidence coefficient threshold value, the standard segmentation confidence coefficient corresponding to the standard face segmentation result is used as the corrected face segmentation confidence coefficient corresponding to the suspicious face segmentation result.
In this embodiment, after determining whether the suspicious face segmentation confidence is greater than the confidence threshold, if it is less than or equal to the confidence threshold, the current face snapshot image is more cluttered and the suspicious face segmentation results have little credibility, so the standard segmentation confidence corresponding to the standard face segmentation result can be used directly as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
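The two branches of this confidence correction can be sketched as follows; the threshold value, the table breakpoints and the coefficient values are hypothetical, since the patent only states that the target regulation coefficient is looked up from the suspicious face segmentation confidence.

```python
# Hypothetical lookup table: (lower confidence bound, regulation coefficient)
COEFF_TABLE = [(0.9, 1.00), (0.8, 0.95), (0.7, 0.90), (0.0, 0.85)]

def lookup_regulation_coefficient(suspicious_conf):
    """Table lookup: first row whose lower bound the confidence clears."""
    for lower_bound, coeff in COEFF_TABLE:
        if suspicious_conf >= lower_bound:
            return coeff
    return COEFF_TABLE[-1][1]

def corrected_confidence(suspicious_conf, standard_conf, threshold=0.6):
    """Above the threshold, scale the standard confidence by the looked-up
    coefficient; otherwise fall back to the standard confidence as-is."""
    if suspicious_conf > threshold:
        return standard_conf * lookup_regulation_coefficient(suspicious_conf)
    return standard_conf
```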
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention.
The following are device embodiments of the invention, for details not described in detail therein, reference may be made to the corresponding method embodiments described above.
Fig. 2 shows a schematic structural diagram of a vehicle inspection device according to an embodiment of the present invention. For convenience of explanation, only the portions related to the embodiment of the present invention are shown; details are as follows:
as shown in fig. 2, the vehicle inspection device includes: an acquisition module 21, a first processing module 22, a second processing module 23 and a third processing module 24.
an acquisition module 21, configured to acquire a face snapshot image when the vehicle is about to drive into the inspection area;
a first processing module 22, configured to perform scene detection on the face snapshot image according to a preset scene detection network and determine a snapshot scene corresponding to the face snapshot image;
a second processing module 23, configured to perform face recognition on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image;
and a third processing module 24, configured to correct the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image.
According to the embodiment of the present invention, a face snapshot image is acquired when a vehicle is about to drive into an inspection area; scene detection is then performed on the face snapshot image according to a preset scene detection network to determine the snapshot scene corresponding to the face snapshot image; face recognition is performed on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image; and the initial face recognition result is corrected according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image. In this way, the target face recognition result corresponding to the face snapshot image is obtained accurately for different snapshot scenes, which reduces the possibility that misjudgment or missed judgment by the face recognition equipment affects the efficiency of vehicle inspection.
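The four-module flow described above can be sketched as follows. This is an illustrative sketch only: the callables passed in are hypothetical stand-ins, since the disclosure does not specify the internals of the scene detection or face recognition networks.

```python
# Illustrative sketch of the acquisition -> scene detection -> recognition ->
# correction flow. All callables here are hypothetical placeholders.

def inspect_vehicle(snapshot_image, scene_detector, face_recognizer, corrector):
    """Run the four-stage vehicle inspection flow on one face snapshot image."""
    scene = scene_detector(snapshot_image)             # first processing module
    initial_result = face_recognizer(snapshot_image)   # second processing module
    return corrector(initial_result, scene)            # third processing module

# Usage with trivial stand-in callables:
result = inspect_vehicle(
    snapshot_image="frame_001.jpg",
    scene_detector=lambda img: "full_load",
    face_recognizer=lambda img: {"segments": ["face_region"], "confidence": 0.9},
    corrector=lambda res, scene: {**res, "scene": scene},
)
```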
In one possible implementation, the initial face recognition result includes a face segmentation result and a face segmentation confidence;
The third processing module 24 may be configured to: determine a suspicious face segmentation result and a trusted face segmentation result among the face segmentation results according to a standard face segmentation result corresponding to the snapshot scene;
correct the suspicious face segmentation result according to the standard face segmentation result, and correct the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result, to obtain a corrected face recognition result;
and obtain a target face recognition result corresponding to the face snapshot image according to the trusted face segmentation result, the trusted face segmentation confidence corresponding to the trusted face segmentation result, and the corrected face recognition result.
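The final assembly step above can be sketched as follows. This is a minimal sketch under assumed data shapes; the disclosure does not prescribe how the trusted and corrected parts are combined, so the dictionary layout here is hypothetical.

```python
# Hypothetical sketch: combine the trusted segmentation results (kept as-is,
# with their confidences) and the corrected results produced for the
# suspicious segmentations into one target face recognition result.

def assemble_target_result(trusted, trusted_confs, corrected):
    """trusted: trusted segmentation results; trusted_confs: their confidences;
    corrected: (segmentation, confidence) pairs from the correction step."""
    target = [
        {"segmentation": seg, "confidence": conf, "source": "trusted"}
        for seg, conf in zip(trusted, trusted_confs)
    ]
    target += [
        {"segmentation": seg, "confidence": conf, "source": "corrected"}
        for seg, conf in corrected
    ]
    return target
```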
In a possible implementation manner, the third processing module 24 may be configured to calculate the cosine similarity and Euclidean distance between the standard face segmentation result corresponding to the snapshot scene and each face segmentation result, and perform convolution extraction on the standard face segmentation result and each face segmentation result to obtain the local information similarity between them;
perform a weighted summation of the cosine similarity, Euclidean distance, and local information similarity corresponding to each face segmentation result to obtain the similarity corresponding to that face segmentation result;
and determine the suspicious face segmentation results and trusted face segmentation results among the face segmentation results according to the similarity corresponding to each face segmentation result and a preset similarity threshold.
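The similarity computation above can be sketched as follows, assuming the segmentation results are represented as feature vectors. The weights, the mapping of Euclidean distance to a similarity, and the small convolution kernel used for "local information" extraction are all illustrative assumptions; the disclosure does not fix their values.

```python
import numpy as np

def segment_similarity(standard, candidate, w_cos=0.4, w_euc=0.3, w_loc=0.3):
    """Weighted similarity between a standard and a candidate segmentation
    feature vector: cosine similarity + Euclidean-distance similarity +
    convolution-based local information similarity (assumed forms)."""
    standard = np.asarray(standard, dtype=float)
    candidate = np.asarray(candidate, dtype=float)

    # Cosine similarity.
    cos_sim = standard @ candidate / (
        np.linalg.norm(standard) * np.linalg.norm(candidate))

    # Euclidean distance mapped into (0, 1] so larger means more similar.
    euc_sim = 1.0 / (1.0 + np.linalg.norm(standard - candidate))

    # Local information: compare features extracted by a small 1-D kernel.
    kernel = np.array([0.25, 0.5, 0.25])
    loc_std = np.convolve(standard, kernel, mode="same")
    loc_cand = np.convolve(candidate, kernel, mode="same")
    loc_sim = loc_std @ loc_cand / (
        np.linalg.norm(loc_std) * np.linalg.norm(loc_cand))

    return w_cos * cos_sim + w_euc * euc_sim + w_loc * loc_sim

def split_results(standard, candidates, threshold=0.8):
    """Partition candidate segmentation results into trusted and suspicious."""
    trusted, suspicious = [], []
    for cand in candidates:
        (trusted if segment_similarity(standard, cand) >= threshold
         else suspicious).append(cand)
    return trusted, suspicious
```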
In one possible implementation, the third processing module 24 may be configured to judge whether the suspicious face segmentation confidence is greater than a confidence threshold;
and, if the suspicious face segmentation confidence is greater than the confidence threshold, regulate the standard segmentation confidence corresponding to the standard face segmentation result according to the target regulation coefficient corresponding to the suspicious face segmentation confidence, and use the regulated standard segmentation confidence as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
In a possible implementation manner, the third processing module 24 may be further configured to use the standard segmentation confidence corresponding to the standard face segmentation result as the corrected face segmentation confidence corresponding to the suspicious face segmentation result if the suspicious face segmentation confidence is less than or equal to the confidence threshold.
In one possible implementation manner, the target regulation coefficient is determined by looking up a correspondence table between the suspicious face segmentation confidence and preset regulation coefficients.
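The confidence-correction rule above can be sketched as follows. The table values and the multiplicative form of the "regulation" are assumptions for illustration; the disclosure specifies only that the coefficient comes from a correspondence table.

```python
# Illustrative sketch of the confidence-correction rule. Table bounds,
# coefficient values, and multiplicative regulation are assumed, not
# taken from the disclosure.

REGULATION_TABLE = [  # (lower bound of suspicious confidence, coefficient)
    (0.9, 1.00),
    (0.8, 0.95),
    (0.7, 0.90),
]

def lookup_coefficient(suspicious_conf):
    """Look up the target regulation coefficient for a suspicious confidence."""
    for lower_bound, coeff in REGULATION_TABLE:
        if suspicious_conf >= lower_bound:
            return coeff
    return 0.85  # fallback for confidences above the threshold but below 0.7

def corrected_confidence(suspicious_conf, standard_conf, threshold=0.6):
    """Apply the two-branch rule described in the embodiments above."""
    if suspicious_conf > threshold:
        # Regulate the standard confidence by the table-derived coefficient.
        return standard_conf * lookup_coefficient(suspicious_conf)
    # Otherwise use the standard segmentation confidence directly.
    return standard_conf
```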
In one possible implementation manner, the training process of the preset scene detection network includes:
acquiring the face snapshot images captured when the vehicle is empty, lightly loaded, fully loaded, and overloaded, to form a training set;
training an initial clustering network on the training set to obtain the preset scene detection network.
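The training step above can be sketched as follows. This is a minimal sketch under stated assumptions: the "initial clustering network" is approximated here by plain k-means over simple image statistics, and the feature extractor is a placeholder, since the disclosure does not specify the clustering method or features.

```python
# Hypothetical sketch: cluster snapshot features into four load scenes
# (empty / lightly loaded / fully loaded / overloaded) with k-means.
import numpy as np

def extract_features(images):
    """Placeholder feature extractor: per-image mean and standard deviation."""
    return np.array([[img.mean(), img.std()] for img in images])

def train_scene_detector(images, n_scenes=4, iters=20, seed=0):
    """Return cluster centers that act as the trained scene prototypes."""
    rng = np.random.default_rng(seed)
    feats = extract_features(images)
    centers = feats[rng.choice(len(feats), n_scenes, replace=False)]
    for _ in range(iters):
        # Assign each snapshot to its nearest center, then recompute centers.
        labels = np.argmin(
            np.linalg.norm(feats[:, None] - centers[None], axis=2), axis=1)
        for k in range(n_scenes):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return centers
```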
Fig. 3 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 3, the terminal 3 of this embodiment includes a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30. When the processor 30 executes the computer program 32, the steps of the vehicle inspection method embodiments described above, such as steps 101 to 104 shown in fig. 1, are implemented. Alternatively, when the processor 30 executes the computer program 32, the functions of the modules/units in the foregoing apparatus embodiments, such as the functions of the modules/units 21 to 24 shown in fig. 2, are implemented.
By way of example, the computer program 32 may be partitioned into one or more modules/units, which are stored in the memory 31 and executed by the processor 30 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer program 32 in the terminal 3. For example, the computer program 32 may be split into the modules/units 21 to 24 shown in fig. 2.
The terminal 3 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal 3 may include, but is not limited to, the processor 30 and the memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the terminal 3 and does not constitute a limitation of the terminal 3, which may include more or fewer components than shown, a combination of some components, or different components; for example, the terminal may further include an input/output device, a network access device, a bus, and the like.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the terminal 3, such as a hard disk or a memory of the terminal 3. The memory 31 may also be an external storage device of the terminal 3, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the terminal 3. The memory 31 is used to store the computer program and other programs and data required by the terminal. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated by example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described again here.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other manners. For example, the apparatus/terminal embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the vehicle inspection method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A vehicle inspection method, characterized by comprising:
when a vehicle is about to drive into an inspection area, acquiring a face snapshot image;
performing scene detection on the face snapshot image according to a preset scene detection network, and determining a snapshot scene corresponding to the face snapshot image;
performing face recognition on the face snapshot image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snapshot image; and
correcting the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image.
2. The vehicle inspection method according to claim 1, wherein the initial face recognition result includes a face segmentation result and a face segmentation confidence;
correcting the initial face recognition result according to the snapshot scene to obtain the target face recognition result corresponding to the face snapshot image comprises:
determining a suspicious face segmentation result and a trusted face segmentation result among the face segmentation results according to a standard face segmentation result corresponding to the snapshot scene;
correcting the suspicious face segmentation result according to the standard face segmentation result, and correcting the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result, to obtain a corrected face recognition result; and
obtaining the target face recognition result corresponding to the face snapshot image according to the trusted face segmentation result, the trusted face segmentation confidence corresponding to the trusted face segmentation result, and the corrected face recognition result.
3. The vehicle inspection method according to claim 2, wherein determining suspicious face segmentation results and trusted face segmentation results from the face segmentation results according to standard face segmentation results corresponding to the snap scene comprises:
calculating the cosine similarity and Euclidean distance between the standard face segmentation result corresponding to the snapshot scene and each face segmentation result, and performing convolution extraction on the standard face segmentation result and each face segmentation result to obtain the local information similarity between them;
carrying out weighted summation on the cosine similarity, the Euclidean distance and the local information similarity corresponding to each face segmentation result to obtain the similarity corresponding to each face segmentation result;
and determining suspicious face segmentation results and trusted face segmentation results in the face segmentation results according to the similarity corresponding to the face segmentation results and a preset similarity threshold.
4. The vehicle inspection method according to claim 2, wherein correcting the suspicious face segmentation confidence corresponding to the suspicious face segmentation result according to the standard segmentation confidence corresponding to the standard face segmentation result includes:
judging whether the suspicious face segmentation confidence is greater than a confidence threshold;
and if the suspicious face segmentation confidence is greater than the confidence threshold, regulating the standard segmentation confidence corresponding to the standard face segmentation result according to a target regulation coefficient corresponding to the suspicious face segmentation confidence, and using the regulated standard segmentation confidence as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
5. The vehicle inspection method according to claim 4, further comprising, after determining whether the suspicious face segmentation confidence is greater than a confidence threshold:
and if the suspicious face segmentation confidence is less than or equal to the confidence threshold, using the standard segmentation confidence corresponding to the standard face segmentation result as the corrected face segmentation confidence corresponding to the suspicious face segmentation result.
6. The vehicle inspection method according to claim 4, wherein the target regulation coefficient is determined by looking up a correspondence table between the suspicious face segmentation confidence and preset regulation coefficients.
7. The vehicle inspection method according to any one of claims 1 to 6, wherein the training process of the preset scene detection network includes:
acquiring the face snapshot images captured when the vehicle is empty, lightly loaded, fully loaded, and overloaded, to form a training set;
training an initial clustering network on the training set to obtain the preset scene detection network.
8. A vehicle inspection device, characterized by comprising:
the acquisition module is used for acquiring a face snap image when the vehicle is about to drive into the inspection area;
The first processing module is used for carrying out scene detection on the face snapshot image according to a preset scene detection network and determining a snapshot scene corresponding to the face snapshot image;
the second processing module is used for carrying out face recognition on the face snap image according to a preset face recognition network to obtain an initial face recognition result corresponding to the face snap image;
and the third processing module is used for correcting the initial face recognition result according to the snapshot scene to obtain a target face recognition result corresponding to the face snapshot image.
9. A terminal comprising a memory for storing a computer program and a processor for invoking and running the computer program stored in the memory to perform the method of any of claims 1 to 7.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any of the preceding claims 1 to 7.
CN202410356671.5A 2024-03-27 2024-03-27 Vehicle inspection method, device, terminal and storage medium Pending CN118155264A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410356671.5A CN118155264A (en) 2024-03-27 2024-03-27 Vehicle inspection method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN118155264A true CN118155264A (en) 2024-06-07

Family

ID=91290494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410356671.5A Pending CN118155264A (en) 2024-03-27 2024-03-27 Vehicle inspection method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN118155264A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020052436A1 (en) * 2018-09-12 2020-03-19 杭州海康威视数字技术股份有限公司 Vehicle overload alarming method and apparatus, electronic device, and storage medium
CN111354121A (en) * 2020-03-09 2020-06-30 中通服公众信息产业股份有限公司 Man-car hybrid verification system and method for intelligent inspection station
CN111950499A (en) * 2020-08-21 2020-11-17 湖北民族大学 Method for detecting vehicle-mounted personnel statistical information
CN113920575A (en) * 2021-12-15 2022-01-11 深圳佑驾创新科技有限公司 Facial expression recognition method and device and storage medium
CN113963407A (en) * 2021-10-22 2022-01-21 中国银行股份有限公司 Face recognition result judgment method and device based on business scene
CN115424217A (en) * 2022-08-31 2022-12-02 东方世纪科技股份有限公司 AI vision-based intelligent vehicle identification method and device and electronic equipment
CN116778418A (en) * 2023-06-25 2023-09-19 南京师范大学 Self-adaptive people counting method considering monitoring observation scale and crowd density

Similar Documents

Publication Publication Date Title
CN113705462B (en) Face recognition method, device, electronic equipment and computer readable storage medium
CN112629828B (en) Optical information detection method, device and equipment
CN108182444A (en) The method and device of video quality diagnosis based on scene classification
CN112488054B (en) Face recognition method, device, terminal equipment and storage medium
CN111369790B (en) Vehicle passing record correction method, device, equipment and storage medium
CN110969640A (en) Video image segmentation method, terminal device and computer-readable storage medium
CN118155264A (en) Vehicle inspection method, device, terminal and storage medium
CN112416128B (en) Gesture recognition method and terminal equipment
US20230245421A1 (en) Face clustering method and apparatus, classification storage method, medium and electronic device
CN113989778A (en) Vehicle information matching method and device, terminal equipment and storage medium
CN113705626A (en) Method and device for identifying abnormal life guarantee application families and electronic equipment
CN113919421A (en) Method, device and equipment for adjusting target detection model
CN112270257A (en) Motion trajectory determination method and device and computer readable storage medium
CN110675268A (en) Risk client identification method and device and server
CN116912634B (en) Training method and device for target tracking model
CN113673268B (en) Identification method, system and equipment for different brightness
CN116664416B (en) Lei Dadian cloud data processing method and device, electronic equipment and storage medium
CN114973468B (en) Gate control method, device, equipment and storage medium
CN111669104A (en) Motor driving method, device, terminal and storage medium
CN116994174A (en) Video identification method, device, equipment and storage medium
CN116071586A (en) Image screening method, device, electronic equipment and computer readable storage medium
CN118262324A (en) Target detection method and device, terminal equipment and unmanned equipment
CN117710765A (en) Target recognition method, device, electronic equipment and computer readable storage medium
CN118314275A (en) Three-dimensional scene management method, device, server and computer readable storage medium
CN118447347A (en) Image processing method, device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination