CN110766077A - Method, device and equipment for screening sketch in evidence chain image - Google Patents

Method, device and equipment for screening sketch in evidence chain image

Info

Publication number
CN110766077A
CN110766077A
Authority
CN
China
Prior art keywords
image
training
images
evidence chain
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911017067.5A
Other languages
Chinese (zh)
Inventor
周康明
徐正浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kos Technology Shanghai Co Ltd
Shanghai Eye Control Technology Co Ltd
Original Assignee
Kos Technology Shanghai Co Ltd
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kos Technology Shanghai Co Ltd, Shanghai Eye Control Technology Co Ltd filed Critical Kos Technology Shanghai Co Ltd
Priority to CN201911017067.5A priority Critical patent/CN110766077A/en
Publication of CN110766077A publication Critical patent/CN110766077A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, a device and equipment for screening a close-up image in an evidence chain image. The method comprises the following steps: acquiring an evidence chain image; and processing the evidence chain image with a preset recognition model based on the difference between any two single images, so as to screen the close-up image out of the evidence chain image. By adopting the method, the robustness of close-up image screening can be improved.

Description

Method, device and equipment for screening sketch in evidence chain image
Technical Field
The application relates to the technical field of image processing, and in particular to a method, a device and equipment for screening a close-up image in an evidence chain image.
Background
With the development of science and technology, machines can replace humans in identifying evidence of violations in more and more traffic-law-enforcement scenarios. When an automatic monitoring device such as a camera captures a violation evidence picture, it generally collects both panoramic views of the target vehicle's violation and a close-up view of the target vehicle. The panoramic views form the evidence chain of the violation behavior, while the close-up view is a local enlargement of the region containing the target vehicle, which facilitates review by auditors.
When synthesizing the panoramic views and the close-up view into an evidence chain image, different automatic monitoring devices may place the close-up view at different positions. When an intelligent violation auditing system judges whether the violation recorded by an evidence chain is established, it must first single out the close-up view and then use the information in the remaining panoramic views to judge whether the vehicle violated the rules; accurately screening the close-up view out of the evidence chain image is therefore the foundation on which such a system is implemented. The current practice is to record, in a table, the position of the close-up view within the evidence chain image produced by each camera. In actual use, the table is consulted every time an evidence chain image is received, giving the position of the close-up view in the image synthesized by the current camera, so that the close-up view can be singled out.
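The conventional lookup-table practice described above can be sketched as follows; the camera IDs and positions are hypothetical, purely for illustration:

```python
# Conventional approach: a manually maintained table mapping each camera
# to the position of the close-up view in the evidence chain images it
# produces. Camera IDs and positions here are hypothetical examples.
CLOSE_UP_POSITION = {
    "camera_001": "bottom_right",
    "camera_002": "top_left",
}

def find_close_up_position(camera_id):
    """Look up where the close-up sits in this camera's composite image.

    Fails (KeyError) whenever a camera is missing from the table or has
    changed its stitching layout, which is exactly the robustness problem
    the application sets out to avoid.
    """
    return CLOSE_UP_POSITION[camera_id]
```

Every new camera, and every layout change on an existing camera, requires a manual update to this table, which is the maintenance burden the recognition-model approach removes.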
However, this conventional table-lookup method of determining the position of the close-up view relies on the premise that the stitching layout used by each individual camera is fixed and unchanging, so its robustness is poor.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a device and a storage medium for screening a sketch map in an evidence chain image, which can improve robustness.
In a first aspect, an embodiment of the present application provides a method for screening a sketch map in an evidence chain image, where the method includes:
acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the sketch map.
In one embodiment, the recognition model is a neural network model obtained by training with a plurality of training positive samples and a plurality of training negative samples, the training positive samples comprise different panoramas in the evidence chain training images, and the training negative samples comprise training panoramas and training close-ups in the evidence chain training images.
In one embodiment, the processing, by using a preset recognition model, the evidence chain image based on the difference between any two single images, and the screening of the evidence chain image to obtain the sketch map includes:
disassembling the evidence chain image to obtain a plurality of single images;
extracting an image vector of each single image;
performing vector distance acquisition operation on any two image vectors to obtain a plurality of vector distances; wherein the vector distance obtaining operation is an operation of determining a degree of difference between images based on a distance between image vectors;
and screening the plurality of single images according to the plurality of vector distances to obtain the sketch map.
In one embodiment, the vector distance obtaining operation comprises:
multiplying a first image vector of the first image by a second image vector of the second image (an inner product) to obtain a first product;
multiplying the norm of the first image vector by the norm of the second image vector to obtain a second product;
and dividing the first product by the second product to obtain the vector distance.
In one embodiment, the screening of the close-up image from the plurality of single images according to the plurality of vector distances includes:
comparing each vector distance with a preset distance threshold, and obtaining a matching result corresponding to each vector distance according to the comparison result; wherein the matching results comprise similarity and dissimilarity;
and according to the matching result corresponding to each vector distance, taking a single image which is not similar to other single images as the close-up image.
In one embodiment, the taking a single image which is not similar to other single images as the close-up image according to the matching result corresponding to each vector distance includes:
determining a single image corresponding to the similar vector distance as the matching result as the panoramic image;
and screening the panoramic image from the plurality of single images in the evidence chain image to obtain the close-up image.
In one embodiment, the taking a single image which is not similar to other single images as the close-up image according to the matching result corresponding to each vector distance includes:
taking two single images corresponding to the vector distances with the matching results being dissimilar as a single image pair;
and taking the single image that appears in all of the single image pairs as the close-up image.
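The pair-based selection above can be sketched as follows: given the index pairs whose matching result was dissimilar, the close-up is the single image that appears in every such pair (function and variable names are illustrative, not from the application):

```python
from collections import Counter

def pick_close_up_from_pairs(dissimilar_pairs, num_images):
    """Return the index of the image common to all dissimilar pairs.

    dissimilar_pairs: list of (i, j) index pairs whose matching result
    was "dissimilar". With one close-up among num_images single images,
    it should appear in all num_images - 1 dissimilar pairs.
    """
    counts = Counter(i for pair in dissimilar_pairs for i in pair)
    candidate, hits = counts.most_common(1)[0]
    if hits != num_images - 1:
        raise ValueError("no unique image common to all dissimilar pairs")
    return candidate
```

For example, if image 1 is dissimilar to images 0, 2 and 3, the pairs (0, 1), (1, 2) and (1, 3) all contain image 1, so image 1 is selected as the close-up.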
In one embodiment, the training process of the recognition model includes:
acquiring a plurality of evidence chain training images; wherein each evidence chain training image comprises one training close-up image and at least two training panoramic images;
generating a plurality of training positive samples and a plurality of training negative samples according to the plurality of evidence chain training images; each training positive sample comprises two different training panoramic images from the same evidence chain training image, and each training negative sample comprises a training panoramic image and the training close-up image from the same evidence chain training image;
inputting a plurality of training positive samples and a plurality of training negative samples into a preset initial recognition model, and training the initial recognition model by adopting a preset loss function to obtain the recognition model; wherein the initial recognition model is a neural network model.
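The pair-generation step can be sketched as follows, following the sample definitions used elsewhere in this description (positive = two different panoramas, negative = a panorama plus the close-up). The network architecture and loss function are not reproduced here; all names are illustrative:

```python
import itertools

def make_training_pairs(evidence_chain):
    """Build positive/negative training pairs from one evidence chain
    training image.

    evidence_chain: dict with keys "panoramas" (list of at least two
    panorama crops) and "close_up" (one close-up crop) -- a hypothetical
    input format for illustration. Positive pairs are two different
    panoramas (label 1, similar); negative pairs are a panorama and the
    close-up (label 0, dissimilar).
    """
    panoramas = evidence_chain["panoramas"]
    close_up = evidence_chain["close_up"]
    positives = [(a, b, 1) for a, b in itertools.combinations(panoramas, 2)]
    negatives = [(p, close_up, 0) for p in panoramas]
    return positives + negatives
```

With three panoramas and one close-up per evidence chain training image, this yields three positive pairs and three negative pairs per image.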
In a second aspect, an embodiment of the present application provides an apparatus for screening a sketch map in an evidence chain image, where the apparatus includes:
in a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the sketch map.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the sketch map.
According to the method, apparatus, device and storage medium for screening the close-up image in the evidence chain image described above, the computer device acquires the evidence chain image, processes it with a preset recognition model based on the difference between any two single images, and screens the close-up image out of the evidence chain image. The computer device can thus use the recognition model to automatically pick, from the several single images of an evidence chain image, the close-up image whose difference from the other single images is large. This avoids the poor robustness of the conventional approach of locating the close-up image by table lookup, adapts to a wider range of scenarios more flexibly, and greatly improves robustness. At the same time, the method removes the need to maintain a close-up position table, so screening the close-up image is more convenient and more efficient.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flowchart illustrating a method for screening a sketch in an evidence chain image according to an embodiment;
FIG. 2a is a sample of an evidence chain image provided by one embodiment;
FIG. 2b is a sample of an evidence chain image provided by another embodiment;
FIG. 3 is a schematic flow chart of a method for screening a sketch in an evidence chain image according to another embodiment;
FIG. 3a is a diagram illustrating an example of an evidence chain image split into four single images;
FIG. 4 is a flowchart illustrating a method for screening a sketch in an evidence chain image according to yet another embodiment;
FIG. 5 is a flowchart illustrating a method for screening a sketch in an evidence chain image according to yet another embodiment;
FIG. 5a is a display diagram of a training positive sample provided by one embodiment;
FIG. 5b is a diagram showing a training negative example provided by one embodiment;
FIG. 5c is a network architecture diagram of an identification model provided in one embodiment;
FIG. 6 is a schematic structural diagram of a sketch screening device in an evidence chain image according to an embodiment;
fig. 7 is a schematic structural diagram of a sketch screening device in an evidence chain image according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for screening the sketch map in the evidence chain image provided by the embodiment of the application can be applied to the computer equipment shown in FIG. 1. The computer device comprises a processor, a memory, a network interface, a database, a display screen and an input device which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the recognition models in the following embodiments, and the specific description of the recognition models refers to the specific description in the following embodiments. The network interface of the computer device may be used to communicate with other devices outside over a network connection. Optionally, the computer device may be a server, a desktop, a personal digital assistant, other terminal devices such as a tablet computer, a mobile phone, and the like, or a cloud or a remote server, and the specific form of the computer device is not limited in the embodiment of the present application. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like. Of course, the input device and the display screen may not belong to a part of the computer device, and may be external devices of the computer device.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
It should be noted that the execution subject of the following method embodiments may be a device for screening a sketch in an evidence chain image, and the device may be implemented as part or all of the above computer device by software, hardware, or a combination of software and hardware. The following method embodiments are described by taking the execution subject as the computer device as an example.
Fig. 2 is a schematic flowchart of a method for screening a sketch in an evidence chain image according to an embodiment. The embodiment relates to a specific process for screening a sketch by using a recognition model trained based on the difference of a single image by a computer device. The method comprises the following steps:
s10, acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images.
Specifically, the computer device acquires the evidence chain image, which may be reading the evidence chain image stored in the memory, or receiving an evidence chain image sent by another shooting device, for example, a camera, and this embodiment is not limited thereto. It should be noted that the evidence chain images are illegal evidence pictures captured by electronic police, and each evidence chain image is generally composed of a plurality of single images, wherein each single image includes a close-up image and a plurality of panoramic views of the target vehicle. The panoramic image is used for forming a driving behavior evidence chain, and the close-up image is used for partially amplifying the area of the target vehicle, so that the audit of auditors is facilitated. For example, fig. 2a shows an evidence chain image, which is illustrated by taking a four-in-one picture as an example, wherein the top left, the top right, and the bottom left are three panoramas, and the bottom right is a close-up view. The position of the close-up in fig. 2a is merely an example, and the position of the close-up is generally not fixed when different shooting devices combine the panorama and the close-up into an evidence chain image. Optionally, the close-up view may also be located in the upper left position as in fig. 2 b.
And S20, processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the sketch map.
Specifically, the computer device inputs the evidence chain image into a preset identification model, and the identification model disassembles the evidence chain image to obtain a plurality of single images. Because the recognition model is a neural network model obtained by training a plurality of training positive samples and a plurality of training negative samples, each training positive sample comprises different panoramas in an evidence chain training image, and each training negative sample comprises a training panoramas and a training close-up image in the evidence chain training image, the computer equipment can adopt the recognition model, and screen out an image with a large difference degree with other single images as a close-up image based on the difference degree of the two single images, and the other single images are panoramas.
In this embodiment, the computer device acquires the evidence chain image, processes it with a preset recognition model based on the difference between any two single images, and screens the close-up image out of the evidence chain image. The computer device can therefore use the recognition model to automatically pick, from the several single images, the close-up image that differs greatly from the others, avoiding the poor robustness of the conventional table-lookup method for locating the close-up image. At the same time, the method removes the need to maintain a close-up position table, so screening is more convenient and more efficient.
Optionally, on the basis of the above embodiment, the recognition model is a neural network model trained with a plurality of training positive samples and a plurality of training negative samples; each training positive sample comprises different panoramic images from an evidence chain training image, and each training negative sample comprises a training panoramic image and a training close-up image from an evidence chain training image. Because the model learns the pairwise differences between single images, the computer device can automatically screen out, from the several single images of an evidence chain image, the close-up image that differs greatly from the others. The method can automatically handle evidence chain images generated by different camera devices without a pre-built table of close-up positions, so it adapts to a wider range of scenarios more flexibly and greatly improves robustness. At the same time, it removes the need to maintain a close-up position table, making the screening of the close-up image more convenient and more efficient.
Optionally, on the basis of the foregoing embodiment, a possible implementation manner of the foregoing step S20 may be as shown in fig. 3, and includes:
and S21, disassembling the evidence chain image to obtain a plurality of single images.
Specifically, the computer device uses the recognition model to first disassemble the evidence chain image into a plurality of single images. Optionally, the disassembly may be done by detecting the image edges within the evidence chain image, or by reversing the stitching procedure that composed it; this embodiment does not limit the approach. It should be noted that the plurality of single images obtained in this way comprise one close-up image and at least two panoramic images. As shown in fig. 3a, one evidence chain image is decomposed into four single images P1, P2, P3 and P4.
And S22, extracting an image vector of each single image.
Specifically, the computer device extracts features from each single image to obtain an image vector that characterizes that image. For example, the computer device may extract 128-dimensional features from each single image, yielding a 128-dimensional feature vector such as F = {0.35, 0.57, 1.03, …, 0.78} (128 dimensions in total).
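A minimal stand-in for the embedding step, purely to show the shape of the output: the application does not specify the feature extractor, so the fixed random projection below is NOT the recognition model's method, only a placeholder that maps any single image to a 128-dimensional vector:

```python
import numpy as np

EMBED_DIM = 128  # dimensionality used in the example above

def extract_image_vector(image):
    """Map a single image (H x W array) to a 128-dimensional vector.

    A fixed random projection stands in for the learned feature
    extractor; it only illustrates the interface (image in, 128-d
    feature vector out), not the actual learned features.
    """
    flat = np.asarray(image, dtype=np.float64).ravel()
    projection = np.random.default_rng(42).normal(size=(EMBED_DIM, flat.size))
    return projection @ flat
```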
S23, performing vector distance obtaining operation on any two image vectors to obtain a plurality of vector distances; wherein the vector distance acquiring operation is an operation of determining a degree of difference between images based on a distance between image vectors.
Specifically, the computer device performs a vector distance acquisition operation on any two image vectors, and since the vector distance acquisition operation is an operation of determining a degree of difference between images based on a distance between the image vectors, a vector distance representing the degree of difference between any two image vectors can be obtained.
Optionally, the specific implementation of the vector distance obtaining operation in step S23 may include: multiplying the first image vector of the first image by the second image vector of the second image to obtain a first product; multiplying the norm of the first image vector by the norm of the second image vector to obtain a second product; and dividing the first product by the second product to obtain the vector distance. Specifically, when the computer device calculates the vector distance between two image vectors, it can adopt the formula D12 = (F1 · F2) / (|F1| × |F2|), or a variant of this formula, where D12 represents the vector distance between the first image vector and the second image vector, F1 is the first image vector, and F2 is the second image vector. Calculating the vector distance with this formula is simple, easy to implement and computationally light, which improves calculation efficiency and saves screening time. This way of calculating the vector distance accurately and intuitively reflects the degree of difference between two image vectors, making the screening of the close-up image more accurate.
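The computation described above, the inner product of the two vectors divided by the product of their norms, is a cosine similarity; a minimal sketch:

```python
import numpy as np

def vector_distance(f1, f2):
    """Compute D12 = (F1 . F2) / (|F1| * |F2|) as described above.

    Despite the name "vector distance", this is a cosine similarity:
    values near 1 mean the two image vectors are alike, values near 0
    mean they differ (for non-negative feature vectors).
    """
    f1 = np.asarray(f1, dtype=np.float64)
    f2 = np.asarray(f2, dtype=np.float64)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```

Identical vectors give a value of 1 and orthogonal vectors a value of 0, which matches the worked example later in the description where a value above the threshold (e.g. 0.85 > 0.8) indicates similar images.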
And S24, screening the plurality of single images according to the plurality of vector distances to obtain the close-up graph.
It should be noted that, because the vector distance defined above is a normalized inner product, a larger value indicates a smaller difference between the two images, that is, they are more similar; conversely, a smaller value indicates a larger difference, that is, they are more dissimilar. Specifically, according to the several vector distances that represent the degree of difference between any two single images, the computer device takes the image that differs greatly from all the other single images as the close-up image, so that the close-up image is automatically screened out of the plurality of single images.
In this embodiment, the computer device disassembles the evidence chain image by using the identification model to obtain a plurality of single images, and then extracts an image vector of each single image to perform a vector distance obtaining operation on any two image vectors to obtain a plurality of vector distances. Since the vector distance acquisition operation is an operation of determining the degree of difference between images based on the distance between image vectors, the computer device can automatically screen a close-up image which is greatly different from other single images from a plurality of single images based on the degree of difference of feature vectors between the single images, namely the vector distance. By adopting the method, the computer equipment can screen the close-up map which has a large difference with other single images from the plurality of single images based on the quantized difference degree, namely the vector distance, so that the screening of the close-up map is more accurate, and the judgment of the traffic behavior based on the close-up map is more accurate.
Optionally, one possible implementation manner of step S24 in the foregoing embodiment may be as shown in fig. 4, and includes:
s241, comparing each vector distance with a preset distance threshold value, and obtaining a matching result corresponding to each vector distance according to the comparison result; wherein the matching results comprise similarity and dissimilarity.
Specifically, the computer device compares each obtained vector distance with a preset distance threshold to obtain a comparison result. If a vector distance is greater than the distance threshold, the two image vectors involved are close, and the difference between the images they represent is small; if a vector distance is less than or equal to the distance threshold, the difference between the images represented by the two image vectors is large. For example, assume the distance threshold T = 0.8. When D12 = 0.85, which is greater than the threshold 0.8, it can be determined that the difference between the first image represented by the first image vector and the second image represented by the second image vector is small and their degree of approximation is high; when D12 = 0.4, which is smaller than the threshold 0.8, it can be determined that the difference between the two images is large and their degree of approximation is low. The computer device accordingly marks the matching result of two single images whose vector distance is greater than the distance threshold as similar, and the matching result of two single images whose vector distance is less than or equal to the distance threshold as dissimilar.
S242, according to the matching result corresponding to each vector distance, taking a single image that is dissimilar to the other single images as the close-up image.
Specifically, the computer device collects the similar/dissimilar matching results corresponding to the vector distances and selects a single image that is dissimilar to the other single images as the close-up image. Optionally, the computer device may further treat any single image that is similar to another single image as a panorama, or simply treat every single image other than the close-up image as a panorama, which is not limited in this embodiment.
Optionally, one possible implementation of step S242 may include: determining the single images corresponding to vector distances whose matching result is similar as panoramas; and excluding the panoramas from the plurality of single images in the evidence chain image to obtain the close-up image. Specifically, the computer device may treat all single images corresponding to similar vector distances as panoramas, and then exclude all panoramas from the single images in the evidence chain image to obtain the close-up image. For example, after the evidence chain image is input into the recognition model, it is disassembled and feature vectors are extracted, yielding four feature vectors F1, F2, F3 and F4 corresponding to the four single images. The vector distances between the four image vectors, D12, D13, D14, D23, D24 and D34, are calculated, and each vector distance is compared with the distance threshold T. If D13, D14 and D34 are less than or equal to T, it is determined that the first, third and fourth single images are close to one another, differ little and are similar, so the first, third and fourth single images are panoramas and the remaining second single image is the close-up image. This method is easy to implement and screens the close-up image with high accuracy.
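The first variant of step S242 can be sketched as below; the helper name and the concrete distance values are hypothetical, chosen to mirror the worked example in which D13, D14 and D34 fall at or below the threshold.

```python
def screen_by_panoramas(image_ids, distances, threshold):
    """First variant of S242: every single image appearing in at least one
    similar pair (vector distance <= threshold) is treated as a panorama;
    the images left over are returned as the close-up image(s)."""
    panoramas = set()
    for (i, j), d in distances.items():
        if d <= threshold:
            panoramas.update((i, j))
    return [img for img in image_ids if img not in panoramas]

# Distances chosen to mirror the example: D13, D14, D34 <= T = 0.8, so
# images 1, 3 and 4 are panoramas and image 2 remains as the close-up.
distances = {(1, 2): 0.9, (1, 3): 0.3, (1, 4): 0.2,
             (2, 3): 0.95, (2, 4): 0.85, (3, 4): 0.4}
print(screen_by_panoramas([1, 2, 3, 4], distances, 0.8))  # [2]
```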
Optionally, another possible implementation of step S242 may include: taking the two single images corresponding to each vector distance whose matching result is dissimilar as a single image pair; and taking the single image common to the multiple single image pairs as the close-up image. Specifically, the computer device collects the vector distances whose matching result is dissimilar, takes the two single images of each such large vector distance as a single image pair, and takes the single image that appears in all of the single image pairs as the close-up image. For example, the computer device compares each of the vector distances D12, D13, D14, D23, D24 and D34 with the distance threshold T. If D12, D23 and D24 are greater than T, it is determined that the second single image is far from the other single images and differs from them strongly, so the second single image is the close-up image, and the remaining first, third and fourth single images are panoramas. This method is easy to implement and screens the close-up image with high accuracy.
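The second variant of step S242 can be sketched the same way; again the function name and distance values are illustrative assumptions.

```python
from collections import Counter

def screen_by_dissimilar_pairs(distances, threshold):
    """Second variant of S242: form a single image pair for each vector
    distance greater than the threshold, then return the image that occurs
    in every one of those dissimilar pairs (the common image)."""
    pairs = [pair for pair, d in distances.items() if d > threshold]
    counts = Counter(img for pair in pairs for img in pair)
    common = [img for img, c in counts.items() if c == len(pairs)]
    return common[0] if common else None

# Mirrors the example: D12, D23 and D24 exceed T = 0.8, so image 2 is the
# single image common to all dissimilar pairs, i.e. the close-up image.
distances = {(1, 2): 0.9, (1, 3): 0.3, (1, 4): 0.2,
             (2, 3): 0.95, (2, 4): 0.85, (3, 4): 0.4}
print(screen_by_dissimilar_pairs(distances, 0.8))  # 2
```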
In this embodiment, the computer device compares each vector distance with a preset distance threshold and obtains a matching result corresponding to each vector distance according to the comparison result; it then determines, as the close-up image, the single image that is dissimilar to the other single images according to the matching results.
Alternatively, the recognition model may adopt a Siamese network model for measuring similarity; specifically, a neural network model such as GoogLeNet V2 or ResNet may be used as the base network, and the base network is then trained to obtain the recognition model. The recognition model extracts global features of the images to obtain feature vectors, and the feature vectors are used to judge the similarity between images.
Optionally, on the basis of the foregoing embodiments, the training process of the recognition model may also be as shown in fig. 5, and includes:
S31, acquiring a plurality of evidence chain training images; wherein each evidence chain training image comprises one training close-up image and at least two training panoramic images.
Specifically, the computer device reads a plurality of evidence chain training images from its memory or receives a plurality of evidence chain training images sent by other devices. It should be noted that each evidence chain training image includes one training close-up image and at least two training panoramic images.
S32, generating a plurality of training positive samples and a plurality of training negative samples according to the plurality of evidence chain training images; wherein each training positive sample comprises two different training panoramas in the same evidence chain training image, and each training negative sample comprises the training close-up image and a training panorama in the same evidence chain training image.
Specifically, the computer device disassembles each evidence chain training image to obtain its training close-up image and its at least two training panoramas, and then combines these single images in pairs to obtain the training positive samples and training negative samples of each evidence chain training image, thereby obtaining a plurality of training positive samples and a plurality of training negative samples. Consistent with the contrastive loss below, each training positive sample comprises two different training panoramas in the same evidence chain training image, as shown in fig. 5a, and each training negative sample comprises the training close-up image and a training panorama in the same evidence chain training image, as shown in fig. 5b.
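Pair generation in step S32 can be sketched as follows. The label convention (1 = similar/matched pair, 0 = dissimilar pair) follows the contrastive loss used in training, and the file names and helper name are hypothetical.

```python
from itertools import combinations

def make_training_pairs(close_up, panoramas):
    """Build labeled image pairs from one evidence chain training image.

    y = 1 marks a similar (matched) pair, y = 0 a dissimilar pair, as in
    the contrastive loss: panorama/panorama pairs serve as positive
    samples and close-up/panorama pairs as negative samples.
    """
    positives = [(a, b, 1) for a, b in combinations(panoramas, 2)]
    negatives = [(close_up, p, 0) for p in panoramas]
    return positives + negatives

# Hypothetical file names; one close-up and three panoramas yield
# 3 positive pairs and 3 negative pairs.
pairs = make_training_pairs("close_up.jpg", ["pan1.jpg", "pan2.jpg", "pan3.jpg"])
print(len(pairs))  # 6
```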
S33, inputting the training positive samples and the training negative samples into a preset initial recognition model, and training the initial recognition model by adopting a preset loss function to obtain the recognition model; wherein the initial recognition model is a neural network model.
Specifically, the computer device inputs the plurality of training positive samples and the plurality of training negative samples into a preset initial recognition model for training. It should be noted that the initial recognition model is a neural network model. The initial recognition model feeds the output result of each training sample into a loss function for calculation and corrects the model parameters of the initial recognition model until the loss function converges over the output results of the training samples, thereby completing the training and obtaining the trained recognition model. Optionally, the loss function may be a contrastive loss function:
$$L=\frac{1}{2N}\sum_{n=1}^{N}\left[y_n d_n^{2}+(1-y_n)\max(\mathrm{margin}-d_n,\,0)^{2}\right]$$
where y is a label indicating whether the two single images match: y = 1 indicates that the two single images are similar (matched), and y = 0 indicates that they are dissimilar (unmatched); margin is a preset threshold that can be adjusted as required; d is the vector distance between the two single images; and N is the number of sample pairs. Training the recognition model with the contrastive loss function pulls the two images of a training positive sample close together and pushes the two images of a training negative sample apart. Optionally, the structure of the recognition model may be as shown in fig. 5c: the two single training images of a training sample pair undergo feature extraction through a left branch and a right branch respectively to obtain feature 1 and feature 2, which are then input into the loss function for calculation.
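The contrastive loss just described can be written directly in Python; this is a generic sketch of the standard contrastive loss, with names chosen here for illustration.

```python
def contrastive_loss(pairs, margin=1.0):
    """Contrastive loss over (d, y) pairs, where d is the vector distance
    and y = 1 marks a similar (matched) pair, y = 0 a dissimilar one:

        L = (1 / 2N) * sum( y * d^2 + (1 - y) * max(margin - d, 0)^2 )

    Similar pairs are pulled together (loss grows with d); dissimilar
    pairs are pushed apart until their distance reaches the margin."""
    n = len(pairs)
    total = sum(y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2
                for d, y in pairs)
    return total / (2 * n)

# A matched pair at distance 0 and an unmatched pair beyond the margin
# both contribute nothing; only violations are penalized.
print(contrastive_loss([(0.0, 1), (1.5, 0)]))    # 0.0
print(contrastive_loss([(0.5, 0)], margin=1.0))  # 0.125
```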
In this embodiment, the computer device obtains a plurality of evidence chain training images, generates a plurality of training positive samples and a plurality of training negative samples from them, inputs the samples into a preset initial recognition model, and trains the initial recognition model with a preset loss function to obtain the recognition model. With this method, the recognition model can be obtained by labeling and training on a plurality of sample images, after which the computer device can use the recognition model to automatically screen out, from the plurality of single images of the evidence chain image, the close-up image that differs strongly from the other single images, based on the pairwise differences between single images. This avoids the poor robustness of the traditional approach of determining the position of the close-up image by table lookup: evidence chain images generated by different camera devices can be recognized automatically, and no table of close-up image positions needs to be established in advance, so the method adapts to a wider range of scenarios, is more flexible, and is far more robust. At the same time, the method avoids maintaining a close-up position table, making the screening of close-up images more convenient and efficient.
It should be understood that, although the steps in the flow charts of fig. 2-5 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-5 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an apparatus for screening a sketch in an evidence chain image, including:
an obtaining module 100, configured to obtain an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and the processing module 200 is configured to process the evidence chain image by using a preset identification model based on the difference between any two single images, and to screen the evidence chain image to obtain the close-up image.
In one embodiment, the recognition model is a neural network model trained by using a plurality of training positive samples and a plurality of training negative samples, the training positive samples comprise different panoramas in the evidence chain training images, and the training negative samples comprise training panoramas and training close-ups in the evidence chain training images.
In an embodiment, the processing module 200 is specifically configured to disassemble the evidence chain image to obtain the plurality of single images, extract an image vector of each single image, and perform a vector distance acquisition operation on any two image vectors to obtain a plurality of vector distances.
In one embodiment, the vector distance acquisition operation is an operation of determining the degree of difference between images based on the distance between their image vectors; the plurality of single images are screened according to the plurality of vector distances to obtain the close-up image.
In one embodiment, the vector distance acquisition operation comprises: multiplying a first image vector of the first image by a second image vector of the second image to obtain a first product; multiplying the norm of the first image vector by the norm of the second image vector to obtain a second product; and dividing the first product by the second product to obtain the vector distance.
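One plausible reading of this vector distance operation is the cosine similarity of the two image vectors: the dot product (first product) divided by the product of the vector norms (second product). Note that under this reading a larger value means the images are more alike, so a quantity such as 1 - cosine may be what the translated text intends; the sketch below simply implements the ratio as described, and the function name is an assumption.

```python
def vector_distance(v1, v2):
    """Dot product of the two image vectors divided by the product of
    their norms, i.e. the cosine similarity of v1 and v2."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = sum(a * a for a in v1) ** 0.5
    norm2 = sum(b * b for b in v2) ** 0.5
    return dot / (norm1 * norm2)

print(vector_distance([1.0, 0.0], [1.0, 0.0]))  # 1.0 (same direction)
print(vector_distance([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```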
In an embodiment, the processing module 200 is specifically configured to compare each of the vector distances with a preset distance threshold and obtain a matching result corresponding to each of the vector distances according to the comparison result, wherein the matching result is either similar or dissimilar; and, according to the matching result corresponding to each vector distance, to take a single image that is dissimilar to the other single images as the close-up image.
In an embodiment, the processing module 200 is specifically configured to determine the single images corresponding to vector distances whose matching result is similar as panoramas, and to exclude the panoramas from the plurality of single images in the evidence chain image to obtain the close-up image.
In an embodiment, the processing module 200 is specifically configured to take the two single images corresponding to each vector distance whose matching result is dissimilar as a single image pair, and to take the single image that exists in common among the plurality of single image pairs as the close-up image.
In one embodiment, the apparatus may also be as shown in fig. 7 and further include a training module 300 configured to obtain a plurality of evidence chain training images; generate a plurality of training positive samples and a plurality of training negative samples according to the plurality of evidence chain training images; and input the plurality of training positive samples and the plurality of training negative samples into a preset initial recognition model, training the initial recognition model with a preset loss function to obtain the recognition model; wherein the initial recognition model is a neural network model, and each evidence chain training image comprises one training close-up image and at least two training panoramic images; each training positive sample comprises two different training panoramas in the same evidence chain training image, and each training negative sample comprises the training close-up image and a training panorama in the same evidence chain training image.
For the specific definition of the device for screening the close-up image in the evidence chain image, reference may be made to the above definition of the method for screening the close-up image in the evidence chain image, which is not repeated here. Each module of the above device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in a processor of the computer device or be independent of it, or be stored, in software form, in a memory of the computer device, so that the processor can invoke and perform the operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the close-up image.
In one embodiment, the recognition model is a neural network model trained by using a plurality of training positive samples and a plurality of training negative samples, the training positive samples comprise different panoramas in the evidence chain training images, and the training negative samples comprise training panoramas and training close-ups in the evidence chain training images.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
disassembling the evidence chain image to obtain a plurality of single images;
extracting an image vector of each single image;
performing vector distance acquisition operation on any two image vectors to obtain a plurality of vector distances; wherein the vector distance obtaining operation is an operation of determining a degree of difference between images based on a distance between image vectors;
and screening the plurality of single images according to the plurality of vector distances to obtain the close-up image.
In one embodiment, the vector distance acquisition operation comprises:
multiplying a first image vector of the first image and a second image vector of the second image to obtain a first product;
multiplying the norm of the first image vector by the norm of the second image vector to obtain a second product;
and dividing the first product by the second product to obtain the vector distance.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
comparing each vector distance with a preset distance threshold, and obtaining a matching result corresponding to each vector distance according to the comparison result; wherein the matching results comprise similarity and dissimilarity;
and according to the matching result corresponding to each vector distance, taking a single image which is not similar to other single images as the close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the single images corresponding to vector distances whose matching result is similar as the panoramic images;
and excluding the panoramic images from the plurality of single images in the evidence chain image to obtain the close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
taking the two single images corresponding to each vector distance whose matching result is dissimilar as a single image pair;
and taking the single image that exists in common among the plurality of single image pairs as the close-up image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a plurality of evidence chain training images; wherein each evidence chain training image comprises one training close-up image and at least two training panoramic images;
generating a plurality of training positive samples and a plurality of training negative samples according to the plurality of evidence chain training images; wherein each training positive sample comprises two different training panoramic images in the same evidence chain training image, and each training negative sample comprises the training close-up image and a training panoramic image in the same evidence chain training image;
inputting a plurality of training positive samples and a plurality of training negative samples into a preset initial recognition model, and training the initial recognition model by adopting a preset loss function to obtain the recognition model; wherein the initial recognition model is a neural network model.
It should be clear that, in the embodiments of the present application, the process of executing the computer program by the processor is consistent with the process of executing the steps in the above method, and specific reference may be made to the description above.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the close-up image.
In one embodiment, the recognition model is a neural network model trained by using a plurality of training positive samples and a plurality of training negative samples, the training positive samples comprise different panoramas in the evidence chain training images, and the training negative samples comprise training panoramas and training close-ups in the evidence chain training images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
disassembling the evidence chain image to obtain a plurality of single images;
extracting an image vector of each single image;
performing vector distance acquisition operation on any two image vectors to obtain a plurality of vector distances; wherein the vector distance obtaining operation is an operation of determining a degree of difference between images based on a distance between image vectors;
and screening the plurality of single images according to the plurality of vector distances to obtain the close-up image.
In one embodiment, the vector distance acquisition operation comprises:
multiplying a first image vector of the first image and a second image vector of the second image to obtain a first product;
multiplying the norm of the first image vector by the norm of the second image vector to obtain a second product;
and dividing the first product by the second product to obtain the vector distance.
In one embodiment, the computer program when executed by the processor further performs the steps of:
comparing each vector distance with a preset distance threshold, and obtaining a matching result corresponding to each vector distance according to the comparison result; wherein the matching results comprise similarity and dissimilarity;
and according to the matching result corresponding to each vector distance, taking a single image which is not similar to other single images as the close-up image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the single images corresponding to vector distances whose matching result is similar as the panoramic images;
and excluding the panoramic images from the plurality of single images in the evidence chain image to obtain the close-up image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
taking the two single images corresponding to each vector distance whose matching result is dissimilar as a single image pair;
and taking the single image that exists in common among the plurality of single image pairs as the close-up image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a plurality of evidence chain training images; wherein each evidence chain training image comprises one training close-up image and at least two training panoramic images;
generating a plurality of training positive samples and a plurality of training negative samples according to the plurality of evidence chain training images; wherein each training positive sample comprises two different training panoramic images in the same evidence chain training image, and each training negative sample comprises the training close-up image and a training panoramic image in the same evidence chain training image;
inputting a plurality of training positive samples and a plurality of training negative samples into a preset initial recognition model, and training the initial recognition model by adopting a preset loss function to obtain the recognition model; wherein the initial recognition model is a neural network model.
It should be clear that, in the embodiments of the present application, the process of executing the computer program by the processor is consistent with the process of executing the steps in the above method, and specific reference may be made to the description above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for screening a sketch in an evidence chain image is characterized by comprising the following steps:
acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the close-up image.
2. The method of claim 1, wherein the recognition model is a neural network model trained using a plurality of training positive samples and a plurality of training negative samples, the training positive samples comprising different panoramas in the evidence chain training images, and the training negative samples comprising training panoramas and training close-ups in the evidence chain training images.
3. The method according to claim 1 or 2, wherein the processing the evidence chain image by using a preset recognition model based on the difference between any two single images and screening the close-up image from the evidence chain image comprises:
disassembling the evidence chain image to obtain a plurality of single images;
extracting an image vector of each single image;
performing vector distance acquisition operation on any two image vectors to obtain a plurality of vector distances; wherein the vector distance obtaining operation is an operation of determining a degree of difference between images based on a distance between image vectors;
and screening the plurality of single images according to the plurality of vector distances to obtain the close-up image.
4. The method of claim 3, wherein the vector distance obtaining operation comprises:
multiplying a first image vector of the first image and a second image vector of the second image to obtain a first product;
multiplying the norm of the first image vector by the norm of the second image vector to obtain a second product;
and dividing the first product by the second product to obtain the vector distance.
5. The method of claim 3, wherein said screening the close-up image from said plurality of single images according to said plurality of vector distances comprises:
comparing each vector distance with a preset distance threshold, and obtaining a matching result corresponding to each vector distance according to the comparison result; wherein the matching results comprise similarity and dissimilarity;
and according to the matching result corresponding to each vector distance, taking a single image which is not similar to other single images as the close-up image.
6. The method of claim 5, wherein said taking, as said close-up image, a single image that is dissimilar to the other single images according to the matching result corresponding to each of said vector distances comprises:
determining the single images corresponding to vector distances whose matching result is similar as the panoramic images;
and excluding the panoramic images from the plurality of single images in the evidence chain image to obtain the close-up image.
7. The method of claim 5, wherein said taking, as said close-up image, a single image that is dissimilar to the other single images according to the matching result corresponding to each of said vector distances comprises:
taking the two single images corresponding to each vector distance whose matching result is dissimilar as a single image pair;
and taking the single image that exists in common among the plurality of single image pairs as the close-up image.
8. The method of claim 1, wherein the training process of the recognition model comprises:
acquiring a plurality of evidence chain training images; wherein each evidence chain training image comprises one training close-up image and at least two training panoramic images;
generating a plurality of training positive samples and a plurality of training negative samples according to the plurality of evidence chain training images; wherein each training positive sample comprises two different training panoramic images in the same evidence chain training image, and each training negative sample comprises the training close-up image and a training panoramic image in the same evidence chain training image;
inputting a plurality of training positive samples and a plurality of training negative samples into a preset initial recognition model, and training the initial recognition model by adopting a preset loss function to obtain the recognition model; wherein the initial recognition model is a neural network model.
9. An apparatus for screening sketch maps in evidence chain images, the apparatus comprising:
the acquisition module is used for acquiring an evidence chain image; wherein the evidence chain image is composed of a plurality of single images, and the plurality of single images comprise a close-up image and at least two panoramic images;
and the processing module is used for processing the evidence chain image by adopting a preset identification model based on the difference between any two single images, and screening the evidence chain image to obtain the close-up image.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
CN201911017067.5A 2019-10-24 2019-10-24 Method, device and equipment for screening sketch in evidence chain image Pending CN110766077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911017067.5A CN110766077A (en) 2019-10-24 2019-10-24 Method, device and equipment for screening sketch in evidence chain image

Publications (1)

Publication Number Publication Date
CN110766077A true CN110766077A (en) 2020-02-07

Family

ID=69333351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911017067.5A Pending CN110766077A (en) 2019-10-24 2019-10-24 Method, device and equipment for screening sketch in evidence chain image

Country Status (1)

Country Link
CN (1) CN110766077A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140156657A1 (en) * 2012-12-05 2014-06-05 Siamese Systems Incorporated System and method for documenting evidence
CN105632183A (en) * 2016-01-27 2016-06-01 福建工程学院 Vehicle violation behavior proof method and system thereof
CN106033549A (en) * 2015-03-16 2016-10-19 北京大学 Reordering method in vehicle retrieval and apparatus thereof
CN106600977A (en) * 2017-02-13 2017-04-26 深圳英飞拓科技股份有限公司 Parking violation detection method and system based on multi-feature identification
CN107729502A (en) * 2017-10-18 2018-02-23 公安部第三研究所 A kind of bayonet vehicle individualized feature intelligent retrieval system and method
CN108197326A (en) * 2018-02-06 2018-06-22 腾讯科技(深圳)有限公司 A kind of vehicle retrieval method and device, electronic equipment, storage medium
CN108764068A (en) * 2018-05-08 2018-11-06 北京大米科技有限公司 A kind of image-recognizing method and device
CN108932851A (en) * 2018-06-22 2018-12-04 安徽科力信息产业有限责任公司 A kind of method and device recording the behavior of motor vehicle illegal parking
CN109426769A (en) * 2017-08-24 2019-03-05 合肥虹慧达科技有限公司 The iris identification method and iris authentication system of face auxiliary
WO2019186530A1 (en) * 2018-03-29 2019-10-03 Uveye Ltd. Method of vehicle image comparison and system thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jia Qiaoli (贾巧丽): "Research on Content-Based Image Retrieval Technology", Electronic Journal of Master's Theses *
Zheng Kai (郑凯): "Design and Implementation of the Intersection Detection Subsystem of the Wuhan Intelligent Transportation System", Electronic Journal of Master's Theses *
Zheng Yao (郑耀): "Research on Key Technologies for Illegal Parking Evidence Collection", Electronic Journal of Master's Theses *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340837A (en) * 2020-02-18 2020-06-26 上海眼控科技股份有限公司 Image processing method, device, equipment and storage medium
CN111340811A (en) * 2020-02-19 2020-06-26 浙江大华技术股份有限公司 Method and device for splitting violation synthetic graph and computer storage medium
CN111340811B (en) * 2020-02-19 2023-08-11 浙江大华技术股份有限公司 Resolution method, device and computer storage medium for violation synthetic graph
CN112365465A (en) * 2020-11-09 2021-02-12 浙江大华技术股份有限公司 Method and apparatus for determining type of synthesized image, storage medium, and electronic apparatus
CN112365465B (en) * 2020-11-09 2024-02-06 浙江大华技术股份有限公司 Synthetic image category determining method and device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN109344742B (en) Feature point positioning method and device, storage medium and computer equipment
CN109034078B (en) Training method of age identification model, age identification method and related equipment
EP2676224B1 (en) Image quality assessment
CN110163193B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN110516665A (en) Identify the neural network model construction method and system of image superposition character area
CN110889428A (en) Image recognition method and device, computer equipment and storage medium
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
CN109325429A (en) A kind of method, apparatus, storage medium and the terminal of linked character data
CN110766077A (en) Method, device and equipment for screening sketch in evidence chain image
CN109299658B (en) Face detection method, face image rendering device and storage medium
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
JP2010045613A (en) Image identifying method and imaging device
CN111160275B (en) Pedestrian re-recognition model training method, device, computer equipment and storage medium
CN111191568A (en) Method, device, equipment and medium for identifying copied image
CN111666922A (en) Video matching method and device, computer equipment and storage medium
CN110826484A (en) Vehicle weight recognition method and device, computer equipment and model training method
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN112668462B (en) Vehicle damage detection model training, vehicle damage detection method, device, equipment and medium
CN111444808A (en) Image-based accident liability assignment method and device, computer equipment and storage medium
CN109308704B (en) Background eliminating method, device, computer equipment and storage medium
CN110942456B (en) Tamper image detection method, device, equipment and storage medium
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
WO2022082401A1 (en) Noseprint recognition method and apparatus for pet, computer device, and storage medium
CN112836682A (en) Method and device for identifying object in video, computer equipment and storage medium
CN111178162B (en) Image recognition method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20230224