CN110502975A - Batch processing system for pedestrian re-identification - Google Patents

Batch processing system for pedestrian re-identification

Info

Publication number
CN110502975A
CN110502975A CN201910616631.9A
Authority
CN
China
Prior art keywords
image
batch
image data
processed
gpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910616631.9A
Other languages
Chinese (zh)
Other versions
CN110502975B (en)
Inventor
郭玲玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910616631.9A priority Critical patent/CN110502975B/en
Publication of CN110502975A publication Critical patent/CN110502975A/en
Application granted granted Critical
Publication of CN110502975B publication Critical patent/CN110502975B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a batch processing system for pedestrian re-identification, relates to the technical field of data processing, and is devised to solve the problem that batch processing for pedestrian re-identification is slow in the prior art. The system mainly comprises a central processing unit (CPU) and a graphics processing unit (GPU). The CPU reads the image data of the images to be processed into memory; the CPU concatenates the image data into a matrix to generate batch image data and saves the batch image data in memory; the GPU reads the batch image data into video memory; the GPU inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in video memory; the GPU reads the batch image features back into memory; finally the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified. The system is mainly used in pedestrian re-identification.

Description

Batch processing system for pedestrian re-identification
Technical field
The present invention relates to the field of data processing technology, and in particular to a batch processing system for pedestrian re-identification.
Background technique
Pedestrian re-identification is the technology of judging whether a specific pedestrian is present in an image or video sequence. A currently popular approach is to train a deep learning model whose input is a pedestrian picture and whose output is a feature vector, and then judge whether two pictures show the same person according to the similarity of their feature vectors. When a terminal or server performs pedestrian re-identification, the typical procedure is as follows: the CPU reads the image data of each picture into memory; the image data in memory is copied into video memory through the GPU driver; the GPU computes on the image data in video memory and stores the resulting picture feature data in video memory; the picture feature data in video memory is then copied back into memory through the GPU driver; and finally the CPU obtains the picture feature data from memory.
Before the GPU computes, the images must first be moved from memory into video memory. The time T consumed by one data exchange between memory and video memory consists of two parts: the time of the data transfer itself, T_data, and an additional fixed overhead, T_ext, i.e. T = T_data + T_ext. Suppose the amount of data exchanged is N; then T_data scales as T_data ~ O(N), while T_ext is independent of N and depends only on the number of exchanges, i.e. T_ext ~ O(1) per exchange. If a total amount of data N is transferred in n separate exchanges, then T_data ~ O(N) and T_ext ~ O(n), so the total time required for the data exchange is T ~ O(N) + O(n). Therefore, when the total amount of data N is fixed, reducing the number of transfers n reduces the total time T.
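As a rough numerical illustration of the relationship above (a sketch only: the bandwidth and per-transfer overhead figures below are assumed values, not measurements from this patent), the total cost of moving N bytes in n transfers can be modelled in Python as follows:

    def transfer_time(total_bytes, num_transfers,
                      bandwidth_bytes_per_s=12e9,      # assumed PCI-E throughput
                      overhead_s_per_transfer=20e-6):  # assumed fixed per-transfer overhead
        """Model T = T_data + T_ext for moving total_bytes in num_transfers exchanges."""
        t_data = total_bytes / bandwidth_bytes_per_s      # ~O(N), independent of n
        t_ext = overhead_s_per_transfer * num_transfers   # ~O(n)
        return t_data + t_ext

    n_images = 1000
    bytes_per_image = 3 * 256 * 128 * 4                   # an assumed c*h*w float32 image
    total = n_images * bytes_per_image
    print(transfer_time(total, num_transfers=n_images))   # one copy per image
    print(transfer_time(total, num_transfers=1))          # one batched copy

With the same total amount of data, the batched copy pays the fixed overhead once instead of a thousand times, which is exactly the saving the system described below exploits.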
Because the image data must first be moved from memory into video memory that the GPU can access directly, and this transfer takes additional time, a large amount of time is wasted when the number of pedestrians to be processed is large. The computing capability of the GPU cannot be fully utilized, and batch pedestrian re-identification is therefore slow.
Summary of the invention
In view of this, the present invention provides a batch processing system for pedestrian re-identification, whose main purpose is to solve the problem that batch processing for pedestrian re-identification is slow in the prior art.
According to one aspect of the present invention, a batch processing system for pedestrian re-identification is provided, comprising a central processing unit (CPU) and a graphics processing unit (GPU);
the CPU reads the image data of the images to be processed into memory;
the CPU concatenates the image data into a matrix to generate batch image data, and saves the batch image data in the memory;
the GPU reads the batch image data into video memory;
the GPU inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory;
the GPU reads the batch image features into the memory;
the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified.
According to another aspect of the present invention, a storage medium is provided. At least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform the operations corresponding to the batch processing system for pedestrian re-identification described above.
According to a further aspect of the present invention, a computer device is provided, comprising: a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the batch processing system for pedestrian re-identification described above.
Through the above technical solution, the technical solution provided by the embodiments of the present invention has at least the following advantages:
The present invention provides a batch processing system for pedestrian re-identification, comprising a central processing unit (CPU) and a graphics processing unit (GPU). First, the CPU reads the image data of the images to be processed into memory; the CPU then concatenates the image data into a matrix to generate batch image data; the GPU reads the batch image data into video memory; the GPU then inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory; the GPU reads the batch image features back into memory; finally, the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified. Compared with the prior art, the embodiments of the present invention concatenate the data of multiple images into one matrix and read it into video memory in a single transfer, and then extract the image features of all images in video memory at once. By saving transfer time, batch processing of pedestrian re-identification images is achieved and the speed of pedestrian re-identification is improved.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Detailed description of the invention
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow chart of a batch processing system for pedestrian re-identification provided by an embodiment of the present invention;
Fig. 2 shows a flow chart of another batch processing system for pedestrian re-identification provided by an embodiment of the present invention;
Fig. 3 shows a schematic structural diagram of a computer device provided by an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be more thoroughly understood and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
An embodiment of the present invention provides a batch processing system for pedestrian re-identification. As shown in Fig. 1, the system includes a central processing unit (CPU) and a graphics processing unit (GPU).
The CPU is a very-large-scale integrated circuit; it is the computing and control core of an electronic device and is mainly used for data processing. The GPU is a microprocessor dedicated to image operations. Since pedestrian re-identification is in fact the identification of image data, the CPU and GPU are used together to process the image data and perform pedestrian re-identification.
101. The CPU reads the image data of the images to be processed into memory.
The images to be processed are all images on which pedestrian re-identification needs to be performed. The CPU reading the image data of the images to be processed into memory means that the CPU issues an instruction to memory, the memory locates the image data and its storage position on the hard disk according to the instruction, and the image data is then read from the hard disk and saved in memory.
102. The CPU concatenates the image data into a matrix to generate batch image data, and saves the batch image data in the memory.
Before the matrix concatenation, it is necessary to check whether the images have the same size. If the image sizes are identical, concatenation can proceed directly; if the image sizes differ, the images to be processed are first rescaled to the same size and then concatenated. According to the image size of the images to be processed and the video memory capacity, the maximum number of images that can be moved in at the same time is calculated. According to the number of images to be processed and this maximum number, the images to be processed are divided into one or more groups, and the images to be processed in each group are then concatenated into a matrix to generate batch image data.
The matrix concatenation is implemented using the matrix concatenation function of the programming language. Assuming that the image data of a single image to be processed is a matrix of size c*h*w, matrix concatenation means merging the n images to be processed that belong to the same group into the same matrix, where the n images belonging to the same group are concatenated along the first axis.
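A minimal Python/NumPy sketch of this grouping and first-axis concatenation (the group size n, the image shape and the helper name build_batches are illustrative assumptions, not values fixed by the patent):

    import numpy as np

    def build_batches(images, n):
        """Split a list of c*h*w image arrays into groups of at most n images
        and merge each group into one matrix along a new first axis."""
        batches = []
        for start in range(0, len(images), n):
            group = images[start:start + n]
            batches.append(np.stack(group, axis=0))  # shape (len(group), c, h, w)
        return batches

    # Example: 10 images of shape (3, 256, 128), grouped 4 at a time
    images = [np.random.rand(3, 256, 128).astype(np.float32) for _ in range(10)]
    batches = build_batches(images, n=4)
    print([b.shape for b in batches])  # [(4, 3, 256, 128), (4, 3, 256, 128), (2, 3, 256, 128)]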
103. The GPU reads the batch image data into video memory.
The GPU reading the batch image data into video memory means that the batch image data generated by matrix concatenation of one group of images to be processed is stored in video memory. If the images to be processed are divided into multiple groups, the data must be read in over several transfers.
At the hardware level, data exchange between memory and video memory is generally performed through the PCI-E bus; at the software level, the application usually calls the GPU driver to complete the data exchange. The GPU is particularly good at single-instruction-multiple-data computation: for example, performing 1000 multiplications simultaneously may take only about 2 times as long as performing 10 multiplications, rather than 100 times as long. Therefore, the more identical operations the GPU can perform at once, the greater the benefit in computation time saved. In pedestrian re-identification, the image features of the images to be processed must be extracted, and extracting image features involves a large number of additions and multiplications; the more pedestrian images are processed at a time, the more multiplications and additions can be performed simultaneously, and the more time is saved.
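As a sketch of the difference between per-image transfers and a single batched transfer (written with PyTorch purely for illustration and assuming a CUDA device is available; the patent itself does not name a framework):

    import torch

    images = [torch.rand(3, 256, 128) for _ in range(1000)]  # assumed image shape

    # Per-image transfers: one memory-to-video-memory copy per picture
    images_gpu_one_by_one = [img.to("cuda") for img in images]

    # Batched transfer: concatenate along the first axis in host memory,
    # then copy the whole batch into video memory in a single exchange
    batch = torch.stack(images, dim=0)   # shape (1000, 3, 256, 128), still in memory
    batch_gpu = batch.to("cuda")         # one transfer instead of 1000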
104. The GPU inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory.
A residual neural network model is a neural network model with a residual structure; the residual structure can accelerate the training of the network while improving the accuracy of the model. After the batch image features are extracted, the model parameters of the residual neural network model are corrected through a Softmax loss function, so as to improve the accuracy of feature extraction. The batch image features are the image features corresponding to the batch image data, and include the features of the images to be processed that were concatenated into the matrix.
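A minimal sketch of batch feature extraction with a residual network (a torchvision ResNet-50 with its classification head removed is used here as a stand-in; the patent does not specify the exact architecture, feature dimension or framework):

    import torch
    import torchvision

    # Residual neural network used as a feature extractor: drop the final classification layer
    resnet = torchvision.models.resnet50(weights=None)
    resnet.fc = torch.nn.Identity()
    resnet = resnet.eval().to("cuda")

    batch_gpu = torch.rand(32, 3, 256, 128, device="cuda")  # batch image data already in video memory

    with torch.no_grad():
        batch_features = resnet(batch_gpu)   # shape (32, 2048): one feature vector per row

    # The batch image features stay in video memory until copied back to host memory (step 105)
    batch_features_cpu = batch_features.cpu()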
105. The GPU reads the batch image features into the memory.
After the batch image features in video memory have been read into memory, if there is still unprocessed batch image data, execution continues with step 103; that is, steps 103, 104 and 105 are repeated until the batch image features of all images to be processed have been extracted.
106. The CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified.
Corresponding to the matrix concatenation, this step performs the inverse operation of the matrix concatenation on the batch image features, obtaining image feature vectors in one-to-one correspondence with the images to be processed.
The present invention provides a batch processing system for pedestrian re-identification, comprising a central processing unit (CPU) and a graphics processing unit (GPU). First, the CPU reads the image data of the images to be processed into memory; the CPU then concatenates the image data into a matrix to generate batch image data; the GPU reads the batch image data into video memory; the GPU then inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory; the GPU reads the batch image features back into memory; finally, the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified. Compared with the prior art, the embodiments of the present invention concatenate the data of multiple images into one matrix and read it into video memory in a single transfer, and then extract the image features of all images in video memory at once. By saving transfer time, batch processing of pedestrian re-identification images is achieved and the speed of pedestrian re-identification is improved.
Another embodiment of the present invention provides a batch processing system for pedestrian re-identification. As shown in Fig. 2, the system includes a central processing unit (CPU) and a graphics processing unit (GPU).
The CPU is a very-large-scale integrated circuit; it is the computing and control core of an electronic device and is mainly used for data processing. The GPU is a microprocessor dedicated to image operations. Since pedestrian re-identification is in fact the identification of image data, the CPU and GPU are used together to process the image data and perform pedestrian re-identification.
201. The CPU reads the image data of the images to be processed into memory.
The memory capacity available for saving the images to be processed is limited; when there are more images to be processed than memory can hold, the images to be processed must be loaded into memory in batches. Since the key point of this solution is how the GPU processes the data after it has been read into video memory, and since memory capacity is usually larger than video memory capacity, the case of batching caused by insufficient memory is not analysed in detail here.
202. The CPU concatenates the image data into a matrix to generate batch image data, and saves the batch image data in the memory.
Before the matrix concatenation, it is necessary to check whether the images have the same size. If the image sizes are identical, concatenation can proceed directly; if the image sizes differ, the images to be processed are first rescaled to the same size and then concatenated. The concatenation specifically includes: obtaining batch identification parameters, where the batch identification parameters include the video memory capacity, the picture size, the number of nodes and the feature vector dimension; selecting images to be concatenated from the images to be processed according to the batch identification parameters; and concatenating the images to be concatenated along the first axis to generate the batch image data.
The video memory capacity is determined by the hardware configuration and is a fixed value, such as 128 MB, 256 MB, 512 MB or 1024 MB. The picture size is the input matrix size of the neural network model used to extract the image features. The number of nodes is the number of nodes of the neural network model used to extract the image features; during feature extraction the calculation parameters are shared between nodes, but the scratchpad areas used to save intermediate calculation results cannot be shared, so the number of nodes is also a parameter for selecting the images to be concatenated. The feature vector dimension is the size of the output matrix. If there are n images to be processed, there will be n scratchpad areas, and the size of a scratchpad area is proportional to the number of network nodes. The feature vector dimension behaves similarly: n images to be processed produce n output feature vectors that need to be stored. Therefore the picture size, the number of nodes and the feature vector dimension each have a parallel, independent influence on n.
Selecting the images to be concatenated according to the batch identification parameters specifically includes: calculating the concatenation picture number according to a preset calculation rule based on the batch identification parameters, where the preset calculation rule is n ≈ M / (c*h*w*s1 + N*s2 + f*s3), where M is the video memory capacity, c*h*w is the picture size, s1 is the unit storage capacity of the picture size, N is the number of nodes, s2 is the unit capacity per node, f is the feature vector dimension, and s3 is the unit capacity of the feature vector dimension; judging whether the number of images to be processed is greater than the concatenation picture number; if the judgment result is no, determining that all of the images to be processed are images to be concatenated; if the judgment result is yes, selecting the first images to be concatenated, equal in number to the concatenation picture number, from the images to be processed according to their reading order, and re-designating the remaining images as the images to be processed. The values of s1, s2 and s3 depend on the selected data format: for example, the value is 4 when single-precision floating-point numbers are used and 8 when double-precision floating-point numbers are used. In practice, some additional space needs to be reserved in video memory to guarantee normal operation of the system, but the reserved space has very little influence on the calculated number of images to concatenate, so its influence is ignored here.
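A minimal sketch of this capacity rule (the parameter values in the example are illustrative assumptions, not figures from the patent):

    def concat_picture_number(M, c, h, w, N, f, s1=4, s2=4, s3=4):
        """n ~ M / (c*h*w*s1 + N*s2 + f*s3): how many images fit into video memory,
        given the per-image input size, per-node scratch space and output feature size.
        s1, s2, s3 default to 4 bytes (single-precision floating point)."""
        return int(M // (c * h * w * s1 + N * s2 + f * s3))

    # Assumed example: 1 GiB of video memory, 3*256*128 float32 inputs,
    # 5 million nodes and a 2048-dimensional feature vector
    n = concat_picture_number(M=1 << 30, c=3, h=256, w=128, N=5_000_000, f=2048)
    print(n)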
Concatenating along the first axis to generate the batch image data specifically includes: searching the functions provided by the programming language currently in use for a concatenation function that can concatenate the images to be concatenated along the first axis; and inputting the images to be concatenated into the concatenation function to generate the batch image data. The feature vector of a single image to be processed is a 1*f matrix. If the concatenation were performed along the zero axis, the split parameters would have to be changed continually during decomposition because the size of the f-dimensional vector is not fixed, which is unfavourable for quickly splitting out the feature vector of a single image. If the concatenation is performed along the first axis, the number of matrix rows contributed by each single image's feature vector is 1, so traversing the first axis during decomposition directly yields the feature vector of each single image. In summary, to make the decomposition easier, the image data of the images to be processed is concatenated along the first axis.
203. The GPU reads the batch image data into video memory.
204. The GPU inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory.
Before the batch image features are extracted using the residual neural network model, the method further includes: the GPU inputs training images into the residual neural network model and calculates training feature vectors of the training data; the GPU calculates the deviation between the training feature vectors and the actual feature vectors of the training images according to a preset loss function; the GPU calculates feedback tuning parameters according to the deviation; and the GPU corrects the residual neural network model according to the feedback tuning parameters. Before the training data is input, the method further includes: the CPU concatenates the image data of the training images into a matrix to generate training image data.
A residual neural network is a deep convolutional neural network. Like other artificial neural networks, it amounts to learning a mapping function from an input matrix to an output matrix: in the training stage it finds, through supervised learning over a large number of samples, the parameters that minimize the deviation, and those parameters are then used to extract the batch image features. The extracted batch image features are stored in video memory.
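A minimal sketch of this supervised correction loop as a single training step (PyTorch is used for illustration; the optimizer, learning rate, number of identities and label values are assumptions, since the patent only states that a loss is computed and fed back to correct the model):

    import torch
    import torchvision

    model = torchvision.models.resnet50(weights=None, num_classes=751).to("cuda")  # 751 identities is an assumption
    loss_fn = torch.nn.CrossEntropyLoss()                      # Softmax-based loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    train_batch = torch.rand(32, 3, 256, 128, device="cuda")   # concatenated training image data
    labels = torch.randint(0, 751, (32,), device="cuda")       # placeholder identity labels

    logits = model(train_batch)       # forward pass on the training images
    loss = loss_fn(logits, labels)    # deviation between prediction and ground truth
    optimizer.zero_grad()
    loss.backward()                   # gradients act as the feedback tuning parameters
    optimizer.step()                  # correct the model parameters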
205. The GPU reads the batch image features into the memory.
206. The CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified.
The decomposition of the batch image features corresponds to the matrix concatenation method and specifically includes: traversing the first axis of the batch image features and extracting the row vector of each row of the batch image features in turn, where each row vector is an image feature vector in one-to-one correspondence with an image to be processed.
In other words, the batch image features are extracted row by row: starting from row 0, one row of data is extracted each time as the image feature vector of one image to be processed. The order of the extracted image feature vectors is identical to the order in which the images to be processed were concatenated into the matrix.
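A minimal NumPy sketch of this row-wise decomposition (the array shapes are illustrative assumptions):

    import numpy as np

    batch_features = np.random.rand(32, 2048).astype(np.float32)  # n x f batch image features

    # Inverse of the first-axis concatenation: traverse the first axis and take one row
    # per image, in the same order in which the images were concatenated
    image_feature_vectors = [batch_features[i] for i in range(batch_features.shape[0])]
    print(len(image_feature_vectors), image_feature_vectors[0].shape)  # 32 (2048,)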
207. The CPU calculates the similarity score between the feature vector of the target pedestrian and the image feature vectors.
The similarity score may be a cosine similarity score, which can measure the difference between the feature vector of the target pedestrian and an image feature vector.
208. If the similarity score is greater than a preset threshold, the CPU determines that the target pedestrian and the image to be processed show the same pedestrian.
Judging whether an image to be processed and the target pedestrian show the same pedestrian makes it possible to track a specific pedestrian in the images to be processed.
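A minimal sketch of steps 207 and 208 using cosine similarity (the feature dimension and the threshold value are illustrative assumptions):

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two feature vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    target_vec = np.random.rand(2048).astype(np.float32)   # feature vector of the target pedestrian
    image_feature_vectors = [np.random.rand(2048).astype(np.float32) for _ in range(32)]  # from step 206
    threshold = 0.8                                        # assumed preset threshold

    for i, feat in enumerate(image_feature_vectors):
        score = cosine_similarity(target_vec, feat)
        if score > threshold:
            print(f"image {i} and the target pedestrian are judged to be the same pedestrian (score={score:.3f})")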
The present invention provides a batch processing system for pedestrian re-identification, comprising a central processing unit (CPU) and a graphics processing unit (GPU). First, the CPU reads the image data of the images to be processed into memory; the CPU then concatenates the image data into a matrix to generate batch image data; the GPU reads the batch image data into video memory; the GPU then inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory; the GPU reads the batch image features back into memory; finally, the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified. Compared with the prior art, the embodiments of the present invention concatenate the data of multiple images into one matrix and read it into video memory in a single transfer, and then extract the image features of all images in video memory at once. By saving transfer time, batch processing of pedestrian re-identification images is achieved and the speed of pedestrian re-identification is improved.
According to an embodiment of the present invention, a storage medium is provided. The storage medium stores at least one executable instruction, and the computer-executable instruction can execute the batch processing system for pedestrian re-identification in any of the above embodiments.
Fig. 3 shows a schematic structural diagram of a computer device provided according to an embodiment of the present invention. The specific embodiments of the present invention do not limit the specific implementation of the computer device.
As shown in Fig. 3, the computer device may include: a processor 302, a communication interface 304, a memory 306 and a communication bus 308.
The processor 302, the communication interface 304 and the memory 306 communicate with each other through the communication bus 308.
The communication interface 304 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 302 is used for executing a program 310, and may specifically execute the relevant steps in the above embodiments of the batch processing system for pedestrian re-identification.
Specifically, the program 310 may include program code, and the program code includes computer operation instructions.
The processor 302 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computer device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 306 is used for storing the program 310. The memory 306 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one magnetic disk memory.
The program 310 may specifically be used to cause the processor 302 to perform the following operations:
operations of a system including a central processing unit (CPU) and a graphics processing unit (GPU), wherein:
the CPU reads the image data of the images to be processed into memory;
the CPU concatenates the image data into a matrix to generate batch image data, and saves the batch image data in the memory;
the GPU reads the batch image data into video memory;
the GPU inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory;
the GPU reads the batch image features into the memory;
the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device. In some cases, the steps shown or described can be performed in an order different from the order herein, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A batch processing system for pedestrian re-identification, characterized by comprising a central processing unit (CPU) and a graphics processing unit (GPU), wherein:
the CPU reads the image data of images to be processed into memory;
the CPU concatenates the image data into a matrix to generate batch image data, and saves the batch image data in the memory;
the GPU reads the batch image data into video memory;
the GPU inputs the batch image data into a residual neural network model, extracts the batch image features of the batch image data, and stores the batch image features in the video memory;
the GPU reads the batch image features into the memory;
the CPU decomposes the batch image features into image feature vectors in one-to-one correspondence with the images to be processed, so that the images to be processed can be identified.
2. The system as claimed in claim 1, characterized in that concatenating the image data into a matrix to generate the batch image data comprises:
obtaining batch identification parameters, the batch identification parameters including the video memory capacity, the picture size, the number of nodes and the feature vector dimension;
selecting images to be concatenated from the images to be processed according to the batch identification parameters;
concatenating the images to be concatenated along the first axis to generate the batch image data.
3. The system as claimed in claim 2, characterized in that selecting the images to be concatenated from the images to be processed according to the batch identification parameters comprises:
calculating the concatenation picture number according to a preset calculation rule based on the batch identification parameters, the preset calculation rule being n ≈ M/(c*h*w*s1 + N*s2 + f*s3), where M is the video memory capacity, c*h*w is the picture size, s1 is the unit storage capacity of the picture size, N is the number of nodes, s2 is the unit capacity of the number of nodes, f is the feature vector dimension, and s3 is the unit capacity of the feature vector dimension;
judging whether the number of the images to be processed is greater than the concatenation picture number;
if the judgment result is no, determining that all of the images to be processed are images to be concatenated;
if the judgment result is yes, selecting, according to the reading order of the images to be processed, the first images to be concatenated equal in number to the concatenation picture number from the images to be processed, and re-designating the remaining images to be processed as the images to be processed.
4. The system as claimed in claim 2, characterized in that concatenating the images to be concatenated along the first axis to generate the batch image data comprises:
searching the functions provided by the programming language currently in use for a concatenation function, the concatenation function being able to concatenate the images to be concatenated along the first axis;
inputting the images to be concatenated into the concatenation function to generate the batch image data.
5. The system as claimed in claim 1, characterized in that before the batch image data is input into the residual neural network model and the batch image features of the batch image data are extracted, the system further comprises:
the GPU inputs training images into the residual neural network model and calculates training feature vectors of the training data;
the GPU calculates the deviation between the training feature vectors and the actual feature vectors of the training images according to a preset loss function;
the GPU calculates feedback tuning parameters according to the deviation;
the GPU corrects the residual neural network model according to the feedback tuning parameters.
6. The system as claimed in claim 5, characterized in that before the training images are input into the residual neural network model and the training feature vectors of the training data are calculated, the system further comprises:
the CPU concatenates the image data of the training images into a matrix to generate training image data.
7. The system as claimed in claim 2, characterized in that decomposing the batch image features into image feature vectors in one-to-one correspondence with the images to be processed comprises:
traversing the first axis of the batch image features and extracting the row vector of each row of the batch image features in turn, the row vectors being the image feature vectors in one-to-one correspondence with the images to be processed.
8. The system as claimed in claim 1, characterized in that after the batch image features are decomposed into the image feature vectors in one-to-one correspondence with the images to be processed, the system further comprises:
the CPU calculates the similarity score between the feature vector of a target pedestrian and the image feature vectors;
if the similarity score is greater than a preset threshold, the CPU determines that the target pedestrian and the image to be processed show the same pedestrian.
9. A storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform the operations corresponding to the batch processing system for pedestrian re-identification according to any one of claims 1 to 8.
10. A computer device, comprising: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the batch processing system for pedestrian re-identification according to any one of claims 1 to 8.
CN201910616631.9A 2019-07-09 2019-07-09 Batch processing system for pedestrian re-identification Active CN110502975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910616631.9A CN110502975B (en) 2019-07-09 2019-07-09 Batch processing system for pedestrian re-identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910616631.9A CN110502975B (en) 2019-07-09 2019-07-09 Batch processing system for pedestrian re-identification

Publications (2)

Publication Number Publication Date
CN110502975A true CN110502975A (en) 2019-11-26
CN110502975B CN110502975B (en) 2023-06-23

Family

ID=68586201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910616631.9A Active CN110502975B (en) 2019-07-09 2019-07-09 Batch processing system for pedestrian re-identification

Country Status (1)

Country Link
CN (1) CN110502975B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050125369A1 (en) * 2003-12-09 2005-06-09 Microsoft Corporation System and method for accelerating and optimizing the processing of machine learning techniques using a graphics processing unit
CN109902546A (en) * 2018-05-28 2019-06-18 华为技术有限公司 Face identification method, device and computer-readable medium
CN109214273A (en) * 2018-07-18 2019-01-15 平安科技(深圳)有限公司 Facial image comparison method, device, computer equipment and storage medium
CN109726626A (en) * 2018-09-27 2019-05-07 合肥博焱智能科技有限公司 Face identification system based on GPU
CN109740413A (en) * 2018-11-14 2019-05-10 平安科技(深圳)有限公司 Pedestrian recognition methods, device, computer equipment and computer storage medium again
CN109784166A (en) * 2018-12-13 2019-05-21 北京飞搜科技有限公司 The method and device that pedestrian identifies again

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269276A (en) * 2021-06-28 2021-08-17 深圳市英威诺科技有限公司 Image recognition method, device, equipment and storage medium
CN113666073A (en) * 2021-07-22 2021-11-19 苏州华兴源创科技股份有限公司 Product transferring method
CN113666073B (en) * 2021-07-22 2022-10-14 苏州华兴源创科技股份有限公司 Product transferring method

Also Published As

Publication number Publication date
CN110502975B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
KR102170105B1 (en) Method and apparatus for generating neural network structure, electronic device, storage medium
Chen et al. Automated synthetic-to-real generalization
CN108229591A (en) Neural network adaptive training method and apparatus, equipment, program and storage medium
CN108830385B (en) Deep learning model training method and device and computer readable storage medium
CN111461164B (en) Sample data set capacity expansion method and model training method
CN110647992A (en) Training method of convolutional neural network, image recognition method and corresponding devices thereof
CN110502975A (en) A kind of batch processing system that pedestrian identifies again
CN110109543A (en) C-VEP recognition methods based on subject migration
Douillard et al. Tackling catastrophic forgetting and background shift in continual semantic segmentation
CN114186084A (en) Online multi-mode Hash retrieval method, system, storage medium and equipment
CN114840322A (en) Task scheduling method and device, electronic equipment and storage
CN116306793A (en) Self-supervision learning method with target task directivity based on comparison twin network
CN115344805A (en) Material auditing method, computing equipment and storage medium
CN113850298A (en) Image identification method and device and related equipment
CN116468895A (en) Similarity matrix guided few-sample semantic segmentation method and system
CN107977980A (en) A kind of method for tracking target, equipment and computer-readable recording medium
US20100322472A1 (en) Object tracking in computer vision
CN107992821B (en) Image identification method and system
JP7438591B2 (en) Training methods, devices and electronic devices for neural networks for image retrieval
CN107622037A (en) The method and apparatus that a kind of Matrix Multiplication for improving graphics processing unit calculates performance
CN112668639A (en) Model training method and device, server and storage medium
CN114282741A (en) Task decision method, device, equipment and storage medium
CN116187464B (en) Blind quantum computing processing method and device and electronic equipment
CN115705478A (en) Multi-agent track prediction method and device based on Kupmann theory and relation inference
CN112101563A (en) Confidence domain strategy optimization method and device based on posterior experience and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant