CN111985304A - Patrol alarm method, system, terminal equipment and storage medium - Google Patents

Patrol alarm method, system, terminal equipment and storage medium

Info

Publication number
CN111985304A
CN111985304A (application CN202010630103.1A)
Authority
CN
China
Prior art keywords
image
target
patrol
alarm
blacklist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010630103.1A
Other languages
Chinese (zh)
Inventor
王维治
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Infineon Information Co ltd
Original Assignee
Shenzhen Infinova Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Infinova Intelligent Technology Co Ltd filed Critical Shenzhen Infinova Intelligent Technology Co Ltd
Priority to CN202010630103.1A
Publication of CN111985304A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B23/00Alarms responsive to unspecified undesired or abnormal conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a patrol alarm method, a patrol alarm system, a terminal device and a storage medium, which belong to the technical field of image processing and can provide effective information alarms. The method comprises the following steps: inputting a regional image containing a target object, acquired during patrol, into a trained detection model for processing, and determining a first target image existing in the regional image; matching the first target image with blacklist images in a blacklist image library; and if the matching is successful, generating alarm information and sending it to an alarm terminal, wherein the alarm information instructs the alarm terminal to raise an alarm.

Description

Patrol alarm method, system, terminal equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and in particular relates to a patrol alarm method, system, terminal device and storage medium.
Background
With economic and social development, people travel and move about more and more frequently. To increase the density of patrols in public places and safeguard people's travel, most public places establish efficient, safe and proactive patrol systems. For example, in public places such as tourist attractions, parks and stations, security personnel ride patrol cars to patrol the area so that emergencies in these places, such as theft or assault, can be handled promptly.
However, in many existing patrol modes, security personnel discover and handle sudden illegal events in public places manually; they cannot discover in advance the offending persons and vehicles hidden in public places, and therefore cannot intervene in advance or carry out other handling work around those persons and vehicles. As a result, existing patrol modes cannot provide effective information alarms.
Disclosure of Invention
The embodiments of the application provide a patrol alarm method, a patrol alarm system, a terminal device and a storage medium, aiming to solve the problem that existing patrol modes cannot provide effective information alarms.
In a first aspect, an embodiment of the present application provides a patrol alarm method, including:
inputting a regional image containing a target object acquired in the patrol process into a trained detection model for processing, and determining a first target image existing in the regional image;
matching the first target image with a blacklist image in a blacklist image library;
and if the matching is successful, generating alarm information and sending the alarm information to an alarm terminal, wherein the alarm information is used for indicating the alarm terminal to send an alarm.
According to the patrol alarm method provided by the application, a regional image containing a target object is first acquired during patrol and input into a trained detection model for processing, and a first target image existing in the regional image is determined. The first target image describes an actual target object encountered during patrol, so the first target image is matched with the blacklist images in a blacklist image library; if the matching is successful, alarm information is generated and sent to an alarm terminal, thereby providing the effective information alarm that conventional patrol methods lack.
Optionally, the inputting the area image including the target object acquired in the patrol process into the trained detection model for processing, and determining the first target image existing in the area image includes:
acquiring a regional image containing a target object;
inputting the area image into the trained detection model for processing to obtain a second target image and corresponding multiple target attribute information;
and if the target attribute information meets a preset output condition, outputting the second target image as a first target image.
Optionally, the target attribute information includes target attributes and trust values corresponding to each target attribute;
if the plurality of target attribute information meets a preset output condition, outputting the second target image as a first target image, including:
and if the trust value corresponding to each target attribute is greater than the preset trust value of the corresponding target attribute, outputting the second target image as the first target image.
Optionally, the matching the first target image with a blacklist image in a blacklist image library includes:
extracting a feature code of the first target image;
matching the feature code of the first target image with the feature code of each blacklist image in the blacklist image library, and comparing the similarity value obtained in each matching with a preset similarity threshold value;
and if a blacklist image whose similarity value is greater than or equal to the preset similarity threshold exists, determining that the matching is successful, and confirming that a blacklist image corresponding to the first target image exists in the blacklist image library.
Optionally, the target object is a human face, and the detection model is a human face detection model;
the method for acquiring the regional image containing the target object in the patrol process, inputting the regional image into the trained detection model for processing, and determining the first target image existing in the regional image comprises the following steps:
and inputting the regional image containing the face acquired in the patrol process into a trained face detection model for processing, determining the face image existing in the regional image, and identifying the face image as a first target image.
Optionally, the target object is a license plate, and the detection model is a license plate detection model;
the method for acquiring the regional image containing the target object in the patrol process, inputting the regional image into the trained detection model for processing, and determining the first target image existing in the regional image comprises the following steps:
the method comprises the steps of collecting a regional image containing a license plate in the patrol process, inputting the regional image into a trained license plate detection model for processing, determining the license plate image existing in the regional image, and identifying the license plate image as a first target image.
Optionally, a panoramic image including the target object is acquired in the patrol process.
In a second aspect, an embodiment of the present application provides a patrol alarm system, including:
the determination module is used for inputting the area image with the target object acquired in the patrol process into the trained detection model for processing, and determining a first target image existing in the area image;
the matching module is used for matching the first target image with a blacklist image in a blacklist image library;
and the alarm module is used for generating alarm information and sending the alarm information to an alarm terminal if the matching is successful, wherein the alarm information is used for indicating the alarm terminal to send an alarm.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the patrol alarm method when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for patrolling and alarming is implemented.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the patrol alarm method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a patrol alarm method according to an embodiment of the present application;
fig. 2 is a detailed flowchart of step S101 in the patrol warning method provided in the embodiment of the present application;
fig. 3 is a schematic specific flowchart of step S102 in the patrol warning method according to the embodiment of the present application;
fig. 4 is a schematic structural diagram of a patrol alarm system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a patrol alarm method according to an embodiment of the present application. In this embodiment, the patrol alarm method is used for finding target objects corresponding to blacklist images during a patrol vehicle's patrol; the execution subject of the method is a terminal device, and the terminal device is configured with a camera assembly.
The patrol warning method shown in fig. 1 comprises the following steps:
s101: and inputting the regional image containing the target object acquired in the patrol process into the trained detection model for processing, and determining a first target image existing in the regional image.
In step S101, the area image is an image acquired by the terminal device during patrol in which the target object occupies a certain proportion of the image area. For example, the target object occupies 80% of the area of the region image.
In the present embodiment, the region image includes only the target object. For example, the region image is only a face image, or the region image is only a license plate image. Alternatively, the region image includes only one target object. For example, the region image is a face of a pedestrian. Still alternatively, the region image includes a plurality of target objects. For example, the region images are faces of a plurality of pedestrians.
In practical application, the terminal equipment can be configured on a patrol vehicle, and when the patrol vehicle is used for carrying out patrol work, the terminal equipment collects images on a patrol route in real time so as to obtain an area image.
In some embodiments, when the terminal device receives a patrol instruction, the camera assembly connected to the terminal device collects, in response to the patrol instruction, an area image containing the target object during the patrol.
In some embodiments, when the region image is input into the trained detection model for processing, both the first target image present in the region image and the category of the first target image are determined.
The first target image is an image corresponding to the target object. For example, if the target object is a human face, the first target image is an image of that face, and there is no obvious difference between the face in the image and the actual face.
In this embodiment, a detection model trained in advance is stored in the terminal device in advance. The detection model is obtained by training an initial detection model based on a sample training set by using a machine learning algorithm. The initial detection model is a deep learning network model which processes the area images to confirm whether the first target image exists or not under the condition of taking a plurality of images as reference.
It can be understood that the detection model may be trained in advance by the terminal device, or a file corresponding to the detection model may be transplanted to the terminal device after being trained in advance by another device. That is, the subject of execution for training the detection model may be the same as or different from the subject of execution for performing image processing using the detection model. For example, when the initial detection model is trained by other devices, after the initial detection model is trained by other devices, the model parameters of the initial detection model are fixed to obtain a file corresponding to the detection model. And then the file is transplanted to the terminal device.
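The patent does not name a specific framework or file format for the trained model file; the following minimal sketch, in Python, simply illustrates loading a previously trained and frozen model file that has been copied ("transplanted") to the terminal device. The TorchScript format and the file path are assumptions, not part of the described embodiments.

```python
# Minimal sketch: load a detection model file trained on another device and
# copied to the terminal device. Assumes a TorchScript export; the patent does
# not specify a framework.
import torch

MODEL_PATH = "detection_model.pt"  # hypothetical file path on the terminal device

def load_detection_model(path: str = MODEL_PATH) -> torch.jit.ScriptModule:
    """Load a frozen detection model for inference on the terminal device."""
    model = torch.jit.load(path, map_location="cpu")  # model parameters are already fixed
    model.eval()  # inference only; no further training on the terminal device
    return model
```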
It should be noted that, because the area image containing the target object is acquired during patrol, a camera assembly for image acquisition needs to be configured in advance on the patrol vehicle, so that the area image containing the target object can be acquired through the camera assembly during the patrol. The camera assembly is connected to the terminal device in a wired or wireless manner and transmits the acquired regional image back to the terminal device, so that the regional image can be processed by the detection model in the terminal device.
In this embodiment, since the area image containing the target object is acquired during patrol, the area image acquired each time is not necessarily a good one, owing to the influence of patrol speed, movement path and direction. For example, an acquired region image may be extremely blurred and cannot be used directly, so the region image needs to be input into the trained detection model for processing to determine whether a first target image meeting the requirements exists in it. It should be understood that, in all embodiments of the present application, inputting the region image containing the target object into the trained detection model for processing is, in effect, confirming whether the region image is a usable captured image.
As to when the region image containing the target object is acquired, the following two scenes may be included, but not limited to.
Scene 1: if the start of the patrol work is detected, a regional image containing the target object is acquired in real time during the patrol, input into the trained detection model for processing, and a first target image existing in the regional image is determined.
For example, when the patrol work is detected to be started through a patrol car, a region image containing a target object in the patrol process is collected through a camera assembly arranged on the patrol car and input into a trained detection model for processing, and a first target image existing in the region image is determined.
Scene 2: if it is detected that the patrol has reached a target road section, a region image containing the target object is acquired during the patrol through the camera assembly arranged on the patrol vehicle, input into the trained detection model for processing, and a first target image existing in the region image is determined.
For example, when a patrol car is detected to travel to a square of a railway station, a camera assembly arranged on the patrol car is used for collecting an area image containing a target object in the patrol process, and the area image is input into a trained detection model for processing, so that a first target image existing in the area image is determined.
S102: and matching the first target image with the blacklist image in the blacklist image library.
In step S102, the blacklisted image library includes a plurality of images of the blacklisted target object. For example, the blacklist image library contains a plurality of face images of criminals or a plurality of license plate images of license plates of violation vehicles.
When it is determined that a first target image exists in the region image, the first target image is matched with the blacklist images in the blacklist image library in order to determine whether the target object corresponding to the first target image has been blacklisted. For example, suppose the blacklist image library contains five blacklist images A1, B1, C1, D1 and E1; the first target image A is matched with these five blacklist images one by one, so as to determine from the matching results whether the first target image A corresponds to a blacklisted object.
In some embodiments, the first target image is matched one-to-one with a plurality of blacklist images in a blacklist image library.
In some embodiments, the first target image is matched against multiple blacklist images in the blacklist image library simultaneously.
In one embodiment, to improve the matching efficiency, the plurality of blacklist images in the blacklist image library are classified according to different categories, for example, the category of the blacklist image includes a face and a license plate, and the plurality of blacklist images are classified according to the category.
And when a first target image is obtained, matching the first target image with the blacklist image in the blacklist image library according to the category of the first target image.
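A minimal sketch of the category-based lookup just described follows: blacklist images are grouped by category (for example "face" and "license_plate") so that a first target image is only compared against blacklist images of the same category. The data layout and function names are assumptions for illustration only.

```python
# Group blacklist entries by category so matching only scans entries of the
# same category as the first target image.
from collections import defaultdict

blacklist_by_category = defaultdict(list)  # category -> list of (image_id, feature_code)

def add_blacklist_image(category: str, image_id: str, feature_code) -> None:
    """Register a blacklist image under its category."""
    blacklist_by_category[category].append((image_id, feature_code))

def candidates_for(first_target_category: str):
    """Return only the blacklist entries whose category matches the first target image."""
    return blacklist_by_category.get(first_target_category, [])
```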
S103: and if the matching is successful, generating alarm information and sending the alarm information to an alarm terminal, wherein the alarm information is used for indicating the alarm terminal to send an alarm.
In step S103, the warning information is used to describe that the first target image has been blacklisted.
The alarm is used for prompting that target objects listed in a blacklist exist around the patrol route. For example, a patrol police car generates an audible alarm or a light alarm when receiving the alarm information.
In the embodiment, in order to facilitate the patrolling personnel to quickly handle the blacklisted target object in time, when the first target image is successfully matched with the blacklist image, the warning information is immediately generated and sent to the warning terminal.
In some embodiments, the alarm terminal can be configured on the patrol car in advance, so that patrol personnel can be guided to respond quickly when the alarm information is received.
In some embodiments, the alert information includes one or more of the results of the matching with the blacklist image library and information on an image, distance, orientation, etc. of the target object.
In some embodiments, the alert information also includes a category of the target object. For example, if the category of the target object is a license plate, it indicates that there is a vehicle violating or illegal around the warning terminal.
According to the patrol alarm method provided by the application, a regional image containing a target object is first acquired during patrol and input into a trained detection model for processing, and a first target image existing in the regional image is determined. The first target image describes an actual target object encountered during patrol. The first target image is then matched with the blacklist images in the blacklist image library, and if the matching is successful, alarm information is generated and sent to the alarm terminal, thereby providing the effective information alarm that conventional patrol methods lack.
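As a non-limiting illustration of the overall flow S101 to S103 (detect, match, alarm), the following Python sketch shows one pass of the method. The helper callables detect_fn, match_fn and send_fn, and the structure of the alarm payload, are assumptions standing in for the steps described above.

```python
def patrol_alarm_step(region_image, detect_fn, match_fn, send_fn):
    """One pass of the method: S101 detect, S102 match, S103 alarm.

    detect_fn(region_image) -> first target dict or None
    match_fn(first_target)  -> (matched: bool, blacklist_id or None)
    send_fn(alarm_info)     -> delivers alarm information to the alarm terminal
    """
    # S101: run the trained detection model on the region image
    first_target = detect_fn(region_image)
    if first_target is None:
        return False  # no usable first target image in this region image

    # S102: match the first target image against the blacklist image library
    matched, blacklist_id = match_fn(first_target)

    # S103: on a successful match, generate alarm information and send it
    if matched:
        send_fn({
            "matched_blacklist_id": blacklist_id,
            "category": first_target.get("category"),  # e.g. "face" or "license_plate"
            "image": first_target.get("image"),
        })
    return matched
```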
For example, fig. 2 shows a schematic flowchart of a patrol warning method provided in another embodiment of the present application, which mainly relates to a process of processing an acquired region image based on a trained detection model to determine a first target image existing in the region image. By way of example and not limitation, the method may be applied to the terminal device described above. Referring to fig. 2, the method comprises the steps of:
s201: an area image containing a target object is acquired.
In step S201, the area image is used to describe what is encountered during patrol. For example, if a pedestrian is present in the advancing direction of the patrol vehicle, the face area of the pedestrian is captured through the camera assembly.
In some embodiments, when a preset event is detected during the patrol process, an area image containing the target object is acquired. For example, the predetermined event is to patrol to a predetermined section of road, or to start a patrol work.
In some embodiments, the area images including the same target object may be acquired by a plurality of camera modules, so that one area image including the target object is selected according to a selection strategy. Wherein the selection policy is a logical way of indicating the selection area image. For example, one of the region images in which the area occupied by the target object in the region image is the largest is selected as the acquired target region image.
S202: and inputting the area image into the trained detection model for processing to obtain a second target image and corresponding multiple target attribute information.
In step S202, the second target image is an image corresponding to the target object. For example, if the target object is a face, the second target image is an image of that face, and there is no obvious difference between the face in the image and the actual face.
The object attribute information is image feature information for describing the second object image. For example, if the second target image is a license plate image, the target attribute information may include a license plate number, a license plate type, and a license plate color.
In an example, the acquired region image is a region image containing a license plate. The region image is input into the trained detection model for processing, the license plate present in the region image is identified, and several preset items of target attribute information of the license plate are extracted; for example, the extracted license plate number is 123456, the license plate type is a small automobile license plate, and the license plate color is blue background with white characters and a white frame line.
S203: and if the plurality of target attribute information meet the preset output condition, outputting the second target image as the first target image.
In step S203, the preset output condition is a logical policy for validating the second target image.
In this embodiment, in order to better confirm whether the acquired region image is a required image, a plurality of target attribute information of the second target image are determined by using a preset output condition, and whether the plurality of target attribute information meet a preset requirement is confirmed.
For example, the preset output conditions are that the license plate number consists of digits, the license plate type is a small automobile license plate, and the license plate color is blue background with white characters and a white frame line. The second target image is a license plate with three items of target attribute information: the license plate number is 123456, the license plate type is a small automobile license plate, and the license plate color is yellow background with white characters and a white frame line. Because the license plate color of the second target image does not meet the preset output condition, the second target image cannot be output as the first target image; it is discarded and a new second target image is obtained.
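The output-condition check of S203 can be sketched as follows, using the license plate example above. The condition values and attribute names follow the example; the dictionary layout and function names are assumptions for illustration.

```python
# Each preset output condition checks one target attribute of the second image.
PRESET_OUTPUT_CONDITIONS = {
    "plate_number_is_digits": lambda attrs: attrs["plate_number"].isdigit(),
    "plate_type": lambda attrs: attrs["plate_type"] == "small automobile",
    "plate_color": lambda attrs: attrs["plate_color"] == "blue background, white characters",
}

def meets_output_conditions(attrs: dict) -> bool:
    """Return True only if every preset output condition is satisfied."""
    return all(check(attrs) for check in PRESET_OUTPUT_CONDITIONS.values())

# Example from the text: the detected plate color is yellow, so the check fails
# and the second target image is discarded rather than output as the first one.
second_target_attrs = {
    "plate_number": "123456",
    "plate_type": "small automobile",
    "plate_color": "yellow background, white characters",
}
assert meets_output_conditions(second_target_attrs) is False
```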
In an embodiment of the present application, in a possible implementation manner, the target attribute information includes target attributes and trust values corresponding to each target attribute.
If the plurality of target attribute information meets the preset output condition, outputting the second target image as the first target image, including:
and if the trust value corresponding to each target attribute is greater than the preset trust value of the corresponding target attribute, outputting the second target image as the first target image.
In this embodiment, the confidence value is a confidence parameter value describing a target attribute of the second target image.
The preset trust value is a trusted minimum parameter value describing the target attribute of the second target image.
In an example, after processing by the detection model, the second target image is a license plate with three items of target attribute information: the license plate number is 123456 with a trust value of 80, the license plate type is a small automobile license plate with a trust value of 90, and the license plate color is yellow background with white characters and a white frame line, with a trust value of 50. The preset trust value of the license plate number is 80, the preset trust value of the license plate type is 85, and the preset trust value of the license plate color is 88. Because the trust value of the license plate color of the second target image is smaller than the preset trust value of the license plate color, the second target image may be a fake license plate and is not output as the first target image.
It should be understood that, in order to better determine whether the second target image can be used as the first target image, each of the target attributes is compared separately with the preset trust value of its corresponding target attribute; because the target attributes are different attributes, their corresponding preset trust values differ.
In an embodiment of the present application, if the plurality of target attribute information satisfy the preset output condition, outputting the second target image as the first target image, including:
and if the sum of the trust values corresponding to each target attribute is greater than the sum of the preset trust values of the corresponding target attributes, outputting the second target image as the first target image.
In an embodiment of the present application, after an image acquired during patrol has been processed by the detection model to obtain a first target image, it is further necessary to determine whether the target object corresponding to the first target image has been blacklisted.
Therefore, the embodiment of the application provides a patrol alarm method, which mainly relates to a process of matching the first target image with blacklist images to determine whether the target object corresponding to the first target image has been blacklisted. Referring to fig. 3, the method comprises the following steps:
s301: and extracting the feature code of the first target image.
In step S301, the feature code is information that distinguishes the first target image from other images. Therefore, to facilitate matching with the blacklist images, the feature code describing the first target image is extracted before the first target image is matched with the blacklist images.
In some embodiments, the feature code is a header identification of the image. For example, the file header of the picture stores the related information of the picture, and the corresponding field can be read from the file header to obtain information such as size, format and the like.
In some embodiments, the feature code is an end-of-file identification for the image.
In some embodiments, the feature code is a hash value calculated from the image content. And if the images are not consistent, the hash values obtained by calculation are different.
It should be noted that, after the first target image is acquired, the features of the first target image can be extracted based on an image feature code extraction algorithm in the prior art.
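As a simple illustration of one of the feature-code variants mentioned above, the sketch below computes a hash from the image content. A cryptographic hash only matches byte-identical files, so in practice a perceptual hash or a learned feature vector would likely be used instead; this is not the patent's specific extraction algorithm, and the file path is hypothetical.

```python
# Content-hash feature code: two differing images yield different hash values.
import hashlib

def content_hash_feature_code(image_bytes: bytes) -> str:
    """Return a hex digest of the raw image content."""
    return hashlib.sha256(image_bytes).hexdigest()

with open("first_target.jpg", "rb") as f:  # hypothetical image file
    feature_code = content_hash_feature_code(f.read())
```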
S302: and matching the feature code of the first target image with the feature code of each blacklist image in the blacklist image library, and comparing the similarity value obtained in each matching with a preset similarity threshold value.
In step S302, the similarity value indicates how similar the first target image is to a blacklist image. For example, if the feature code of the first target image is 12345 and the feature code of a blacklist image is 12346, matching the two feature codes yields a similarity value of 80%.
The preset similarity threshold is a minimum similarity value indicating that the target object corresponding to the first target image is blacklisted. For example, if the preset similarity threshold is 70%, and the feature code of the first target image is matched with the feature code of the blacklist image, the obtained similarity value is 60%, and if 60% is smaller than 70%, it indicates that the target object corresponding to the first target image is not listed in the blacklist and there is no corresponding blacklist image in the blacklist library.
S303: and if the blacklist image with the similarity threshold value larger than or equal to the preset similarity value exists, determining that the matching is successful, and confirming that the corresponding blacklist image exists in the blacklist image library in the first target image.
In this embodiment, the similarity threshold is greater than or equal to the preset similarity value, which indicates that the target object corresponding to the first target image and the target object corresponding to the blacklist image are the same object.
For example, the first target image is a face image, the blacklist image is also a face image, when the first target image is matched with the blacklist image, the determined similarity is 90%, and the preset similarity value is 85%, because 90% is greater than 85%, it indicates that the face corresponding to the first target image and the face corresponding to the blacklist image are the same face, and the citizen corresponding to the face is listed in the blacklist.
It should be understood that, in practical applications, since the blacklist images in the blacklist image library are entered in advance, the feature code of each blacklist image is extracted when the image is entered and stored in association with the corresponding blacklist image.
In one embodiment, before the feature code of the first target image is matched with the feature code of each blacklist image in the blacklist image library, the feature code of the blacklist image is extracted.
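Steps S301 to S303 can be sketched as follows: the feature code of the first target image is compared with the pre-extracted feature code of every blacklist image, and the match succeeds once any similarity value reaches the preset similarity threshold. The feature code is assumed here to be a numeric vector and cosine similarity is used; the patent does not fix either choice.

```python
import math

def cosine_similarity(a, b) -> float:
    """Similarity value between two feature-code vectors, in [0, 1] for non-negative codes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_against_blacklist(target_code, blacklist_codes: dict, threshold: float = 0.85):
    """Return (True, image_id) for the first blacklist image whose similarity
    value is greater than or equal to the preset similarity threshold."""
    for image_id, code in blacklist_codes.items():  # codes extracted when each
        if cosine_similarity(target_code, code) >= threshold:  # image was entered
            return True, image_id
    return False, None
```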
In an embodiment of the application, multiple types of target objects may be acquired during the patrol; when the acquired target object is a human face, the detection model is a human face detection model.
Therefore, inputting the area image containing the target object acquired in the patrol process into the trained detection model for processing, and determining the first target image existing in the area image, including:
and inputting the regional image containing the face acquired in the patrol process into a trained face detection model for processing, determining the face image existing in the regional image, and identifying the face image as a first target image.
In this implementation, the face detection model processes the region image containing the face using a face detection algorithm.
The face detection model is obtained by training an initial face detection model by using a face sample training set.
In some embodiments, the face detection model is a convolutional neural network model, and the initial face convolutional neural network model is trained by using a face sample training set.
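For illustration only, the face-detection step can look like the sketch below. OpenCV's Haar cascade detector is used purely as a stand-in; the embodiment described above uses a trained (e.g. convolutional neural network) face detection model, which this placeholder does not implement.

```python
# Placeholder face-detection step: crop candidate face regions from a region
# image (BGR numpy array). Not the patent's trained model.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_face_images(region_image):
    """Return cropped face images found in the region image."""
    gray = cv2.cvtColor(region_image, cv2.COLOR_BGR2GRAY)
    boxes = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [region_image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```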
In an embodiment of the present application, inputting a region image including a face acquired in a patrol process into a trained face detection model for processing, and determining a face image existing in the region image, includes:
acquiring a regional image containing a human face;
inputting the region image into a trained detection model for processing to obtain a second face image and a plurality of corresponding target attribute information;
and if the target attribute information meets the preset output condition, outputting the second face image as the first face image.
In the present embodiment, the target attribute information is feature information describing a human face image. For example, gender, wearing a mask, wearing glasses, beard, age (teenagers, adolescents, middle-aged, elderly), etc.
In an embodiment of the application, multiple types of target objects may be acquired during the patrol; when the acquired target object is a license plate, the detection model is a license plate detection model.
Therefore, inputting the area image with the target object acquired in the patrol process into the trained detection model for processing, and determining the first target image existing in the area image, including:
and inputting the regional image containing the license plate acquired in the patrol process into a trained license plate detection model for processing, determining the license plate image existing in the regional image, and identifying the license plate image as a first target image.
In this embodiment, the license plate detection model processes the region image containing the license plate by using a license plate detection algorithm.
The license plate detection model is obtained by training an initial license plate detection model by utilizing a license plate detection sample training set.
In an embodiment of the present application, inputting a region image including a target object acquired in a patrol process into a trained detection model for processing, and determining a first target image existing in the region image, includes:
acquiring a regional image containing a license plate;
inputting the area image into a trained license plate detection model for processing to obtain a second license plate image and corresponding multiple target attribute information;
and if the target attribute information meets the preset output condition, outputting the second license plate image as the first license plate image.
In this embodiment, the target attribute information is used to describe feature information given by the license plate image. Such as license plate number, license plate color, license plate type, etc.
In an embodiment of the application, in order to better confirm the actual spatial position of the target object indicated by the alarm, a panoramic image containing the target object is acquired in the patrol process.
In this embodiment, the panoramic image includes the target object and the real object in a certain area around the target object.
In some embodiments, a plurality of detection models, such as a face detection model and a license plate detection model, are configured in the terminal device.
And inputting the area image containing the target object acquired in the patrol process into a plurality of detection models for processing, and determining a first target image in the area image.
It can be understood that, since the plurality of detection models are detection models trained by sample training sets of different types of objects, even if the acquired region image including the target object is input to the plurality of detection models for processing, only one detection model can determine the first target image existing in the region image.
In some embodiments, the terminal device is configured with multiple detection models, and before inputting an area image containing a target object acquired in a patrol process into a trained detection model for processing, the type of the target object in the area image is extracted in advance, and the target detection model is determined according to the type information.
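The model-selection step just described can be sketched as follows: the category of the target object is extracted first, and the matching detection model is then chosen. The category extractor, the pre-loaded models and the category names are assumptions for illustration.

```python
def detect_with_routed_model(region_image, face_model, plate_model, extract_category):
    """Pick the detection model that matches the extracted object category."""
    detection_models = {
        "face": face_model,              # pre-loaded face detection model
        "license_plate": plate_model,    # pre-loaded license plate detection model
    }
    category = extract_category(region_image)  # type of target object in the image
    model = detection_models.get(category)
    if model is None:
        return None  # no detection model configured for this category
    return model(region_image)  # only the matching model yields a first target image
```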
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the method in the foregoing embodiment, fig. 4 shows a structural block diagram of the patrol warning system provided in the embodiment of the present application, and for convenience of explanation, only the part related to the embodiment of the present application is shown.
Referring to fig. 4, the system includes:
the determination module 100 is configured to input a region image with a target object acquired in a patrol process into a trained detection model for processing, and determine a first target image existing in the region image;
a matching module 200, configured to match the first target image with a blacklist image in a blacklist image library;
and the alarm module 300 is configured to generate alarm information and send the alarm information to an alarm terminal if the matching is successful, where the alarm information is used to instruct the alarm terminal to send an alarm.
Optionally, the system further includes an obtaining module.
And the acquisition module is used for acquiring a regional image containing the target object.
The determining module 100 is further configured to input the region image to the trained detection model for processing, so as to obtain a second target image and a plurality of corresponding target attribute information; and if the plurality of target attribute information meet the preset output condition, outputting the second target image as the first target image.
Optionally, the target attribute information includes target attributes and trust values corresponding to each target attribute.
The determining module 100 is further configured to output the second target image as the first target image if the trust value corresponding to each target attribute is greater than the preset trust value of the corresponding target attribute.
Optionally, the system further comprises an extraction module.
The extraction module is used for extracting the feature codes of the first target image;
the matching module 200 is further configured to match the feature code of the first target image with the feature code of each blacklist image in the blacklist image library, and compare a similarity value obtained in each matching with a preset similarity threshold; and if the blacklist image with the similarity threshold value larger than or equal to the preset similarity value exists, determining that the matching is successful, and confirming that the corresponding blacklist image exists in the blacklist image library in the first target image.
Optionally, the target object is a human face, and the detection model is a human face detection model.
The determining module 100 is further configured to input the region image including the face acquired in the patrol process into the trained face detection model for processing, determine a face image existing in the region image, and identify the face image as the first target image.
Optionally, the target object is a license plate, and the detection model is a license plate detection model.
The determining module 100 is further configured to input the regional image including the license plate acquired in the patrol process into the trained license plate detection model for processing, determine the license plate image existing in the regional image, and recognize the license plate image as the first target image.
Optionally, the obtaining module is further configured to collect a panoramic image containing the target object during the patrol.
In one embodiment, the patrol alarm system comprises a camera assembly, a server and a terminal device, wherein the terminal device is connected with the camera assembly and the server is connected with the terminal device. The camera assembly is used for collecting an area image containing a target object; the terminal device is used for confirming, based on the trained detection model, whether a first target image exists in the area image; and the server is used for matching the first target image with the blacklist images in the blacklist image library and, when the matching is successful, generating alarm information and sending it to the alarm terminal, wherein the alarm information instructs the alarm terminal to raise an alarm.
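An illustrative division of work in this camera / terminal / server embodiment is sketched below. All names (camera.stream(), server.submit(), match_fn, the alarm payload) are assumptions, and the transport between the components (for example HTTP or a message queue) is not specified in the text.

```python
def terminal_device_loop(camera, detection_model, server):
    """Terminal device side: acquire region images and detect first target images."""
    for region_image in camera.stream():              # camera assembly: acquisition
        first_target = detection_model(region_image)  # terminal: detection model
        if first_target is not None:
            server.submit(first_target)               # hand off to the server

def server_handle(first_target, match_fn, alarm_terminal):
    """Server side: blacklist matching and alarm generation."""
    matched, blacklist_id = match_fn(first_target)    # server: blacklist library match
    if matched:
        alarm_terminal.send({                         # alarm information
            "matched_blacklist_id": blacklist_id,
            "target_image": first_target,
        })
```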
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: at least one processor 50 (only one processor is shown in fig. 5), a memory 51, and a computer program 52 stored in the memory 51 and operable on the at least one processor 50, wherein the processor 50 implements the steps in any of the various patrol alarm method embodiments described above when executing the computer program 52.
The terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 50 and a memory 51. Those skilled in the art will appreciate that fig. 5 is only an example of the terminal device 5 and does not constitute a limitation on the terminal device 5, which may include more or fewer components than those shown, combine certain components, or have different components, such as an input-output device, a network access device, and the like.
The Processor 50 may be a Central Processing Unit (CPU), and the Processor 50 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field-Programmable Gate arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 51 may in some embodiments be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, where the terminal device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A patrol alarm method is characterized by comprising the following steps:
inputting a regional image containing a target object acquired in a patrol process into a trained detection model for processing, and determining a first target image existing in the regional image;
matching the first target image with a blacklist image in a blacklist image library;
and if the matching is successful, generating alarm information and sending the alarm information to an alarm terminal, wherein the alarm information is used for indicating the alarm terminal to send an alarm.
2. The patrol alarm method according to claim 1, wherein inputting the area image containing the target object, acquired during patrol, into the trained detection model for processing and determining the first target image present in the area image comprises:
acquiring the area image containing the target object;
inputting the area image into the trained detection model for processing to obtain a second target image and a plurality of pieces of corresponding target attribute information; and
if the plurality of pieces of target attribute information meet a preset output condition, outputting the second target image as the first target image.
3. The patrol alarm method according to claim 2, wherein the target attribute information comprises target attributes and a trust value corresponding to each target attribute; and
outputting the second target image as the first target image if the plurality of pieces of target attribute information meet the preset output condition comprises:
outputting the second target image as the first target image if the trust value corresponding to each target attribute is greater than the preset trust value for that target attribute.
4. The patrol alarm method according to claim 1, wherein matching the first target image against a blacklist image in the blacklist image library comprises:
extracting a feature code of the first target image;
matching the feature code of the first target image against the feature code of each blacklist image in the blacklist image library, and comparing the similarity value obtained from each match with a preset similarity threshold; and
if a blacklist image whose similarity value is greater than or equal to the preset similarity threshold exists, determining that the matching succeeds and confirming that a blacklist image corresponding to the first target image exists in the blacklist image library.
5. The patrol alarm method according to claim 1, wherein the target object is a human face and the detection model is a face detection model; and
inputting an area image containing a target object, acquired during patrol, into a trained detection model for processing and determining a first target image present in the area image comprises:
inputting an area image containing a human face, acquired during patrol, into a trained face detection model for processing, determining a face image present in the area image, and taking the face image as the first target image.
6. The patrol alarm method according to claim 1, wherein the target object is a license plate and the detection model is a license plate detection model; and
inputting an area image containing a target object, acquired during patrol, into a trained detection model for processing and determining a first target image present in the area image comprises:
inputting an area image containing a license plate, acquired during patrol, into a trained license plate detection model for processing, determining a license plate image present in the area image, and taking the license plate image as the first target image.
7. The patrol alarm method according to claim 1, wherein a panoramic image containing the target object is acquired during patrol.
8. A patrol alarm system, characterized by comprising:
a determination module, configured to input an area image containing a target object, acquired during patrol, into a trained detection model for processing and to determine a first target image present in the area image;
a matching module, configured to match the first target image against a blacklist image in a blacklist image library; and
an alarm module, configured to generate alarm information and send the alarm information to an alarm terminal if the matching succeeds, wherein the alarm information is used to instruct the alarm terminal to issue an alarm.
9. A terminal device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
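
The following is a minimal, non-authoritative Python sketch of the patrol alarm flow recited in claim 1. The detect, match_blacklist, and send_alarm callables are hypothetical stand-ins for the trained detection model, the blacklist matching step, and the alarm terminal; none of these names comes from the application.

```python
from typing import Any, Callable, Optional

def patrol_alarm_step(
    area_image: Any,
    detect: Callable[[Any], Optional[Any]],           # wraps the trained detection model
    match_blacklist: Callable[[Any], Optional[str]],  # returns a blacklist image id, or None
    send_alarm: Callable[[dict], None],               # pushes alarm information to the alarm terminal
) -> Optional[dict]:
    """Run one patrol-alarm cycle on a single area image (claim 1 flow)."""
    first_target = detect(area_image)           # first target image found in the area image
    if first_target is None:
        return None                             # no target detected, nothing to match
    matched_id = match_blacklist(first_target)  # compare against the blacklist image library
    if matched_id is None:
        return None                             # matching failed, no alarm
    alarm_info = {"target": first_target, "blacklist_id": matched_id}
    send_alarm(alarm_info)                      # the alarm terminal issues the alarm
    return alarm_info
```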
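
A sketch of the output condition in claims 2 and 3, assuming each detected second target image carries per-attribute trust values; the attribute names and preset trust values below are invented for illustration.

```python
# Preset trust value for each target attribute (invented example values).
PRESET_TRUST = {"clarity": 0.6, "completeness": 0.7, "frontal_pose": 0.5}

def meets_output_condition(attribute_trust: dict) -> bool:
    """True when every attribute's trust value exceeds its preset trust value."""
    return all(
        attribute_trust.get(attribute, 0.0) > preset
        for attribute, preset in PRESET_TRUST.items()
    )

# A second target image whose trust values all clear the preset values
# would be output as the first target image.
example = {"clarity": 0.82, "completeness": 0.91, "frontal_pose": 0.66}
print(meets_output_condition(example))  # True
```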
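
A sketch of the feature-code matching in claim 4, assuming feature codes are fixed-length float vectors and using cosine similarity as the similarity measure; the application does not specify the feature extractor, the similarity measure, or the threshold value, so all three are assumptions here.

```python
import math
from typing import List, Optional, Sequence, Tuple

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_feature_code(
    target_code: Sequence[float],
    blacklist_codes: List[Tuple[str, Sequence[float]]],  # (blacklist image id, feature code)
    similarity_threshold: float = 0.85,                   # preset similarity threshold (assumed value)
) -> Optional[str]:
    """Return the id of the first blacklist image whose similarity reaches the threshold."""
    for image_id, code in blacklist_codes:
        if cosine_similarity(target_code, code) >= similarity_threshold:
            return image_id   # matching succeeded
    return None               # no blacklist image reached the threshold
```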
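
A hypothetical decomposition of the system of claim 8 into a determination module, a matching module, and an alarm module; the class and method names, and the detect/lookup/send interfaces of the injected objects, are illustrative only and not taken from the application.

```python
class DeterminationModule:
    """Feeds a patrol area image to the trained detection model."""
    def __init__(self, detection_model):
        self.detection_model = detection_model

    def determine(self, area_image):
        # Returns the first target image found in the area image, or None.
        return self.detection_model.detect(area_image)

class MatchingModule:
    """Matches a target image against the blacklist image library."""
    def __init__(self, blacklist_library):
        self.blacklist_library = blacklist_library

    def match(self, target_image):
        # Returns the matched blacklist entry, or None if matching fails.
        return self.blacklist_library.lookup(target_image)

class AlarmModule:
    """Generates alarm information and sends it to the alarm terminal."""
    def __init__(self, alarm_terminal):
        self.alarm_terminal = alarm_terminal

    def alarm(self, target_image, blacklist_entry):
        alarm_info = {"target": target_image, "entry": blacklist_entry}
        self.alarm_terminal.send(alarm_info)  # the terminal issues the alarm
        return alarm_info
```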
CN202010630103.1A 2020-07-03 2020-07-03 Patrol alarm method, system, terminal equipment and storage medium Pending CN111985304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010630103.1A CN111985304A (en) 2020-07-03 2020-07-03 Patrol alarm method, system, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111985304A true CN111985304A (en) 2020-11-24

Family

ID=73439367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010630103.1A Pending CN111985304A (en) 2020-07-03 2020-07-03 Patrol alarm method, system, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111985304A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530366A (en) * 2013-10-12 2014-01-22 湖北微模式科技发展有限公司 Vehicle searching method and system based on user-defined features
CN109993946A (en) * 2017-12-29 2019-07-09 国民技术股份有限公司 A kind of monitoring alarm method, camera, terminal, server and system
CN110795587A (en) * 2018-08-01 2020-02-14 深圳云天励飞技术有限公司 Medical alarm early warning method, service platform and computer readable storage medium
CN110378189A (en) * 2019-04-22 2019-10-25 北京旷视科技有限公司 A kind of monitoring method for arranging, device, terminal and storage medium
CN111126224A (en) * 2019-12-17 2020-05-08 成都通甲优博科技有限责任公司 Vehicle detection method and classification recognition model training method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580509A (en) * 2020-12-18 2021-03-30 中国民用航空总局第二研究所 Logical reasoning type road surface detection method and system
CN112580509B (en) * 2020-12-18 2022-04-15 中国民用航空总局第二研究所 Logical reasoning type road surface detection method and system
CN114155483A (en) * 2021-11-11 2022-03-08 鸿富锦精密电子(郑州)有限公司 Monitoring alarm method, device, storage medium and computer equipment

Similar Documents

Publication Publication Date Title
US10762338B2 (en) Method and apparatus for detecting fake license plates of vehicles, readable storage medium, and electronic device
CN110390262B (en) Video analysis method, device, server and storage medium
CN112085952A (en) Vehicle data monitoring method and device, computer equipment and storage medium
CN110580808B (en) Information processing method and device, electronic equipment and intelligent traffic system
CN110909699A (en) Video vehicle non-guide driving detection method and device and readable storage medium
CN108932849B (en) Method and device for recording low-speed running illegal behaviors of multiple motor vehicles
CN111985356A (en) Evidence generation method and device for traffic violation, electronic equipment and storage medium
CN111369801B (en) Vehicle identification method, device, equipment and storage medium
CN113496213B (en) Method, device, system and storage medium for determining target perception data
CN111292530A (en) Method, device, server and storage medium for processing violation pictures
CN111985304A (en) Patrol alarm method, system, terminal equipment and storage medium
US20180189588A1 (en) Device for reading vehicle license plate number and method therefor
CN114170585B (en) Dangerous driving behavior recognition method and device, electronic equipment and storage medium
CN111368617B (en) Vehicle access data processing method and device
CN114724122A (en) Target tracking method and device, electronic equipment and storage medium
CN106530727A (en) Traffic violation information processing system
CN111768630A (en) Violation waste image detection method and device and electronic equipment
CN112597924B (en) Electric bicycle track tracking method, camera device and server
CN115019242A (en) Abnormal event detection method and device for traffic scene and processing equipment
Kiac et al. ADEROS: artificial intelligence-based detection system of critical events for road security
CN114202919A (en) Method, device and system for identifying shielding of electronic license plate of non-motor vehicle
CN112839064A (en) Intelligent control method and device for vehicle
Pai et al. Realization of Internet of vehicles technology integrated into an augmented reality system
CN115631477B (en) Target identification method and terminal
CN113129597B (en) Method and device for identifying illegal vehicles on motor vehicle lane

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221226

Address after: 518000 Yingfei Haocheng Science Park, Guansheng 5th Road, Luhu Community, Guanhu Street, Longhua District, Shenzhen, Guangdong 1515

Applicant after: Shenzhen Infineon Information Co.,Ltd.

Address before: 518000 Room 301, Infineon Technology Co., Ltd., No. 12, Guanbao Road, Luhu community, Guanhu street, Longhua District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN INFINOVA INTELLIGENT TECHNOLOGY Co.,Ltd.
