CN111259910A - Object extraction method and device - Google Patents

Object extraction method and device

Info

Publication number
CN111259910A
CN111259910A (application CN201811453893.XA)
Authority
CN
China
Prior art keywords
information
sampling
local
feature
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811453893.XA
Other languages
Chinese (zh)
Inventor
沈旭
杨继伟
黄建强
华先胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201811453893.XA
Publication of CN111259910A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an object extraction method and device that obtain better local features simply and effectively, thereby better supporting subsequent object identification.

Description

Object extraction method and device
Technical Field
The present application relates to, but is not limited to, computer technology, and more particularly to an object extraction method and device.
Background
Feature extraction (FE) refers to the method and process of using a computer to extract characteristic information from an image.
Global features refer to the overall properties of an image and are low-level visual features at the pixel level; common global features include color, texture, and shape features. Global features have good invariance, are simple to compute, and are intuitive to represent, but global descriptions are poorly suited to cluttered or occluded images. Local features are features extracted from local regions of an image, including edges, corners, lines, curves, regions with special attributes, and the like.
How to obtain better local features, so as to support subsequent applications, is an urgent problem for the industry to solve.
Disclosure of Invention
The application provides an object extraction method and device that obtain better local features, thereby better supporting object identification.
An embodiment of the present invention provides an object extraction method, comprising the following steps:
locating local feature regions of the object according to high-level feature information and prior knowledge of the object;
sampling each located local feature region to acquire sampling information;
and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature regions.
In one illustrative example, the method further comprises:
performing feature extraction on the object with a neural network model to obtain the high-level feature information.
In an exemplary embodiment, the prior knowledge is used to characterize different parts of the object and comprises one or more pieces of pre-obtained sub-block information that partition the object into regions.
In one illustrative example, locating the local feature region of the object includes:
locating one or more local feature regions in the high-level feature information of the object according to the different parts of the object characterized by the prior knowledge.
In an illustrative example, the sampling comprises upsampling or downsampling.
In an exemplary instance, the size of the sampling information map corresponding to the sampling information is smaller than or equal to the size of the high-level feature map corresponding to the high-level feature information.
In an exemplary instance, obtaining the local feature information of the local feature region includes:
performing feature extraction on the sampling information map corresponding to the sampling information with a neural network model, to obtain the local feature information of the local feature region.
The present application further provides a computer-readable storage medium storing computer-executable instructions for performing any of the object extraction methods described above.
The application further provides an apparatus for implementing object extraction, comprising a memory and a processor, wherein the memory stores instructions executable by the processor for performing the steps of any of the object extraction methods described above.
The application further provides an object recognition method, comprising:
acquiring local feature information of an object, comprising: locating local feature regions of the object according to high-level feature information and prior knowledge of the object; sampling each located local feature region to acquire sampling information; and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature regions;
and merging the obtained local feature information and the high-level feature information as the object recognition result.
In one illustrative example, locating the local feature region of the object includes:
locating one or more local feature regions in the high-level feature information of the object according to the different parts of the object characterized by the prior knowledge.
In an illustrative example, the sampling comprises upsampling or downsampling.
In an exemplary instance, the size of the sampling information map corresponding to the sampling information is smaller than or equal to the size of the high-level feature map corresponding to the high-level feature information.
In an exemplary instance, obtaining the local feature information of the local feature region includes:
performing feature extraction on the sampling information map corresponding to the sampling information with a neural network model, to obtain the local feature information of the local feature region.
The present application further provides a pedestrian recognition method, comprising:
acquiring local feature information of a pedestrian image, comprising: locating local feature regions of the object according to high-level feature information and prior knowledge of the pedestrian image; sampling each located local feature region to acquire sampling information; and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature regions;
and merging the obtained local feature information and the high-level feature information as the pedestrian recognition result.
The present application further provides a computer-readable storage medium storing computer-executable instructions for performing any of the object identification methods described above.
The application further provides an apparatus for implementing object recognition, comprising a memory and a processor, wherein the memory stores instructions executable by the processor for performing the steps of any of the object recognition methods described above.
The object extraction method comprises: locating local feature regions of the object according to high-level feature information and prior knowledge of the object; sampling each located local feature region to acquire sampling information; and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature regions. The method thus obtains better local features simply and effectively, thereby better supporting subsequent object identification.
The object recognition method comprises: acquiring local feature information of an object, comprising locating local feature regions of the object according to high-level feature information and prior knowledge of the object, sampling each located local feature region to acquire sampling information, and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature regions; and merging the obtained local feature information and the high-level feature information as the object recognition result. The method prepares richer feature information for object identification and thereby better supports subsequent recognition.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
FIG. 1 is a flowchart of the object extraction method of the present application;
FIG. 2 is a schematic structural diagram of the object extraction device of the present application;
FIG. 3 is a flowchart of the object recognition method of the present application;
FIG. 4 is a schematic structural diagram of the object recognition device of the present application;
FIG. 5 is a processing diagram of an embodiment of the local feature extraction and fusion process within object recognition according to the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
In one exemplary configuration of the present application, a computing device includes one or more processors (CPUs), input/output interfaces, a network interface, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions. Also, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
Fig. 1 is a flowchart of the object extraction method of the present application. As shown in fig. 1, the method includes:
Step 100: locating local feature regions of the object according to high-level feature information and prior knowledge of the object.
In one illustrative example, the high-level feature information is feature information obtained by performing feature extraction on the object with a neural network model. The high-level feature information describes the global features of the object and has greater discriminative power than low-level and mid-level feature information.
High-level attribute features contain richer semantic information and are more robust to changes in illumination and viewing angle.
Therefore, the present application uses high-level attribute features to guide the low-level features in locating the local feature regions, ensuring that reasonable local feature regions are located and thereby providing a basis for effectively improving recognition performance.
Taking a pedestrian re-identification scenario as an example, the feature information used may include low-level visual features, mid-level filter features, and high-level attribute features. Low-level visual features and their combinations are the feature information most commonly used in pedestrian re-identification; a combination of multiple low-level visual features carries richer information and discriminates better than any single feature, so low-level visual features are often combined for pedestrian re-identification. Mid-level filter features are feature information extracted from combinations of image blocks with strong discriminative power in pedestrian images; the filters reflect distinctive visual patterns of the pedestrian that correspond to different body parts and can effectively express the particular body structure of a pedestrian. High-level attribute features are human attributes such as clothing style, gender, hair style, and personal belongings; they are soft biometric features and have stronger discriminative power than the low-level visual features and mid-level filter features.
In one exemplary embodiment, the prior knowledge is used to characterize different parts of the object and comprises one or more pieces of pre-obtained sub-block information that partition the object into regions. For example, for an image in which the object is a person, the prior knowledge may include head position information, upper-body position information, lower-body position information, and so on. As another example, for an image in which the object is a bird, the prior knowledge may include the beak, head, wings, tail, claws, and so on.
In an exemplary instance, locating the local feature region of the object in this step may include: locating one or more local feature regions in the high-level feature information of the object according to the different parts of the object characterized by the prior knowledge. Taking a pedestrian re-identification scenario as an example, assuming the prior knowledge comprises three regions (head, upper body, and lower body), this step locates the head, upper-body, and lower-body feature regions of the object and crops them out, yielding three local region feature maps: a head region feature map, an upper-body region feature map, and a lower-body region feature map.
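The localization step above can be sketched in a few lines. This is an illustrative assumption, not code from the application: the high-level feature map is modeled as a list of rows, and the prior knowledge as fractional vertical extents per body part (the head/upper-body/lower-body ratios below are made-up example values).

```python
# Hypothetical prior knowledge: vertical extent of each body part,
# expressed as fractions of the feature-map height.
PRIOR_REGIONS = {
    "head":       (0.0, 1.0 / 6.0),
    "upper_body": (1.0 / 6.0, 0.5),
    "lower_body": (0.5, 1.0),
}

def locate_local_regions(feature_map, prior=PRIOR_REGIONS):
    """Crop one sub-map per prior-knowledge region out of feature_map
    (a list of rows, i.e. an H x W high-level feature map)."""
    h = len(feature_map)
    regions = {}
    for name, (top, bottom) in prior.items():
        r0, r1 = round(top * h), round(bottom * h)
        regions[name] = feature_map[r0:r1]   # rows belonging to this part
    return regions

# Toy 12-row "feature map": row i holds the value i four times.
fmap = [[i] * 4 for i in range(12)]
parts = locate_local_regions(fmap)
print(len(parts["head"]), len(parts["upper_body"]), len(parts["lower_body"]))
# → 2 4 6
```

In a real system the rows would be channels of a convolutional feature map, but the cropping logic is the same: no learned localizer is needed, only the fixed region ratios.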
In an exemplary embodiment, the local feature regions may instead be defined directly using manually specified information.
In this method, no additional training data set is needed for auxiliary training, and no additional model is needed to assist learning or localization; local feature regions that express the object's characteristics can be located using only simple, task-related prior knowledge.
Step 101: and respectively sampling the located local characteristic regions to acquire sampling information.
In an exemplary embodiment, each located local feature region is up-sampled to enlarge the corresponding local feature map. This processing provides a sound basis for subsequently obtaining more detailed local feature information.
It should be noted that, depending on actual requirements, the sampling in this step may instead be down-sampling; the specific sampling manner may be chosen according to the actual situation and does not limit the scope of protection of the present application.
In an exemplary embodiment, the size of the sampling information map corresponding to the sampling information is smaller than or equal to the size of the high-level feature map corresponding to the high-level feature information.
In an exemplary embodiment, the located local feature region may also be sampled using a conventional interpolation-based image scaling method.
By enlarging or reducing the located local feature regions through sampling, the present application obtains the desired local feature information more flexibly.
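The up-sampling in step 101 can be sketched with nearest-neighbour interpolation, one of the conventional interpolation schemes the description allows; the application itself does not fix a particular resampling algorithm, so this choice is an assumption for illustration.

```python
def upsample_nearest(region, out_h, out_w):
    """Enlarge a cropped local feature map (list of rows) to out_h x out_w
    by nearest-neighbour sampling, so the local map can match the scale of
    the high-level feature map before further feature extraction."""
    in_h, in_w = len(region), len(region[0])
    return [
        [region[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

head = [[1, 2], [3, 4]]          # a tiny 2x2 local region
big = upsample_nearest(head, 4, 4)
print(big)
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Swapping this for bilinear interpolation, or inverting the index mapping for down-sampling, does not change the surrounding flow: the step only resizes the located region before feature extraction.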
Step 102: and respectively extracting the characteristics of the sampling information to obtain the local characteristic information of the local characteristic region.
In one illustrative example, the step may include: and performing feature extraction on the sampling information graph corresponding to the sampling information by using the neural network model to acquire local feature information of the local feature area.
The object extraction method can be inserted at any sublayer of any neural network model, such as a convolutional layer or a deconvolutional layer.
Through the object extraction method of the present application, better local features are obtained simply and effectively, thereby better supporting subsequent object identification.
The present application further provides a computer-readable storage medium having stored thereon computer-executable instructions for performing the object extraction method of any of the above.
The present application further provides a computer device comprising a memory and a processor, wherein the memory stores instructions executable by the processor for performing the steps of any of the object extraction methods described above.
Fig. 2 is a schematic structural diagram of the object extraction device of the present application. As shown in fig. 2, the device comprises at least: a positioning module, a sampling module, and an extraction module; wherein:
the positioning module is configured to locate local feature regions of the object according to high-level feature information and prior knowledge of the object;
the sampling module is configured to sample each located local feature region to acquire sampling information;
and the extraction module is configured to perform feature extraction on the sampling information respectively to obtain local feature information of the local feature regions.
In an illustrative example, the sampling module is specifically configured to up-sample each located local feature region to enlarge the corresponding local region feature map.
In an exemplary embodiment, the extraction module is specifically configured to perform feature extraction on the sampling information map corresponding to the sampling information with a neural network model, to obtain the local feature information of the local feature region.
Fig. 3 is a flowchart of the object recognition method of the present application. As shown in fig. 3, the method includes:
Step 300: acquiring local feature information of an object, comprising: locating local feature regions of the object according to high-level feature information and prior knowledge of the object; sampling each located local feature region to acquire sampling information; and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature regions.
The implementation of this step may refer to the object extraction method shown in fig. 1 and is not repeated here.
Step 301: and combining the obtained local characteristic information and the high-level characteristic information as an object identification result.
In an illustrative example, a 1 x 1 convolutional neural network model may be employed to merge the obtained local feature information and high-level feature information. The specific implementation is based on the local feature extraction method and the object recognition method provided by the present application, and is easy to be implemented by those skilled in the art, and is not used to limit the scope of the present application.
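The merging step can be illustrated concretely: a 1 x 1 convolution is simply a per-position weighted sum over the concatenated input channels, so it can fuse local feature maps with the high-level (global) map without changing spatial size. The shapes and weights below are arbitrary illustrative values, not parameters from the application (in practice the weights would be learned).

```python
def conv1x1(channels, weights):
    """channels: list of C feature maps (each H x W, as lists of rows).
    weights: C numbers, one per input channel.
    Returns one fused H x W map: at every position, the weighted sum of
    the input channels at that position (i.e. a 1x1 convolution with a
    single output channel)."""
    h, w = len(channels[0]), len(channels[0][0])
    return [
        [sum(wt * ch[r][c] for wt, ch in zip(weights, channels))
         for c in range(w)]
        for r in range(h)
    ]

global_map = [[1.0, 1.0], [1.0, 1.0]]   # stand-in high-level feature map
head_map   = [[0.5, 0.0], [0.0, 0.5]]   # stand-in local feature map
fused = conv1x1([global_map, head_map], weights=[1.0, 2.0])
print(fused)   # → [[2.0, 1.0], [1.0, 2.0]]
```

A real model would use one such weight vector per output channel; the point here is only that the 1 x 1 kernel mixes channels, which is exactly what merging local and global feature information requires.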
The high-level feature information describes the global features of the object. In this step the extracted local features are fused with the global features, rather than each being processed independently in subsequent operations; this prepares richer feature information for object identification and thus better supports subsequent recognition. Moreover, extracting the local features for object recognition requires no additional training data set for auxiliary training and no additional model to assist learning or localization: local feature regions expressing the object's characteristics can be located using only simple, task-related prior knowledge. In addition, the located local feature regions are enlarged or reduced through sampling, so the desired local feature information is obtained more flexibly. Better local features are thus obtained simply and effectively.
The object recognition method can be inserted at any sublayer of any neural network model, such as a convolutional layer, a deconvolutional layer, a fully-connected layer, a pooling layer, or a nonlinear activation layer.
The object recognition method can be applied to, but is not limited to: pedestrian recognition in urban traffic, pedestrian recognition in autonomous-driving scenarios, and the like.
Fig. 4 is a schematic structural diagram of the object recognition device of the present application. As shown in fig. 4, the device comprises: a local feature extraction unit and a fusion unit; wherein:
the local feature extraction unit is configured to acquire local feature information of an object and comprises: a positioning module configured to locate local feature regions of the object according to high-level feature information and prior knowledge of the object; a sampling module configured to sample each located local feature region to acquire sampling information; and an extraction module configured to perform feature extraction on the sampling information respectively to obtain local feature information of the local feature regions;
and the fusion unit is configured to merge the acquired local feature information and high-level feature information as the object recognition result.
In an illustrative example, the sampling module is specifically configured to up-sample each located local feature region to enlarge the corresponding local region feature map.
In an exemplary embodiment, the extraction module is specifically configured to perform feature extraction on the sampling information map corresponding to the sampling information with a neural network model, to obtain the local feature information of the local feature region.
In an exemplary embodiment, the fusion unit is specifically configured to merge the obtained local feature information and high-level feature information using a 1 x 1 convolutional neural network model.
Fig. 5 is a processing diagram of an embodiment of the local feature extraction and fusion process within object recognition according to the present application, taking a pedestrian re-identification scenario as an example. As shown in fig. 5, the process includes:
First, according to high-level feature information obtained by performing feature extraction on the object (an image or a video) with a neural network model, and prior knowledge characterizing different parts of the object, the high-level feature map is divided into regions according to the prior knowledge. In this embodiment the prior knowledge is assumed to comprise three regions (head, upper body, and lower body), so a head feature region, an upper-body feature region, and a lower-body feature region are located.
Then, the three located local feature regions are each up-sampled to obtain a head feature map, an upper-body feature map, and a lower-body feature map at the same scale as the high-level feature map.
Next, feature extraction is performed on the head feature map, the upper-body feature map, and the lower-body feature map respectively, to obtain head feature information, upper-body feature information, and lower-body feature information.
Finally, the obtained local feature information and the high-level feature information are merged using a 1 x 1 convolutional neural network model to obtain the object recognition result.
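The Fig. 5 flow can be sketched end to end on a toy feature map. Everything here is an illustrative assumption: the region ratios, the mean-pool standing in for the neural-network feature extraction, and plain list concatenation standing in for the 1 x 1 convolutional merge.

```python
def pedestrian_descriptor(fmap, prior):
    """fmap: H x W high-level feature map (list of rows).
    prior: list of (top, bottom) height fractions, one per body part."""
    h = len(fmap)
    flat = [v for row in fmap for v in row]
    global_feat = sum(flat) / len(flat)        # stand-in global descriptor
    local_feats = []
    for top, bottom in prior:                  # step 1: locate via prior knowledge
        region = fmap[round(top * h):round(bottom * h)]
        rh = len(region)
        # step 2: up-sample the region back to full height (nearest row)
        up = [region[r * rh // h] for r in range(h)]
        # step 3: stand-in "feature extraction": mean of the enlarged map
        vals = [v for row in up for v in row]
        local_feats.append(sum(vals) / len(vals))
    return local_feats + [global_feat]         # step 4: merge local + global

fmap = [[float(i)] * 2 for i in range(6)]               # toy 6x2 feature map
prior = [(0.0, 1 / 3), (1 / 3, 2 / 3), (2 / 3, 1.0)]    # head/upper/lower (assumed)
print(pedestrian_descriptor(fmap, prior))               # → [0.5, 2.5, 4.5, 2.5]
```

The output is one descriptor per body part plus the global descriptor, mirroring how the embodiment feeds fused local-plus-global information into recognition.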
In this pedestrian re-identification embodiment, up-sampling (enlarging) the local feature regions effectively addresses a problem in the related art: when local features occupy only a small part of the image, mask-based techniques cannot extract the more useful local feature information well. It also makes obtaining the desired local feature information more flexible. Compared with related-art schemes that rely on body-structure information such as joint points for pedestrian recognition, this embodiment needs no additional training data set for auxiliary training and no additional model to assist learning or localization; local feature regions expressing the object's characteristics can be located using only simple, task-related prior knowledge, so better local features are obtained simply and effectively.
Although the embodiments disclosed in the present application are described above, the descriptions are only for the convenience of understanding the present application, and are not intended to limit the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.

Claims (17)

1. An object extraction method, comprising:
locating a local feature region of the object according to high-level feature information and prior knowledge of the object;
sampling each located local feature region to acquire sampling information;
and performing feature extraction on the sampling information respectively to obtain local feature information of the local feature region.
2. The object extraction method of claim 1, further comprising:
performing feature extraction on the object using a neural network model to obtain the high-level feature information.
3. The object extraction method according to claim 1, wherein the a priori knowledge is used to characterize different parts of the object, including one or more pre-obtained sub-block information for regionalizing the object.
4. The object extraction method of claim 3, wherein the locating a local feature region of an object comprises:
locating one or more local feature regions in the high-level feature information of the object according to the different parts of the object characterized by the prior knowledge.
5. The object extraction method of claim 1, wherein the sampling comprises up-sampling or down-sampling.
6. The object extraction method according to claim 1, wherein a size of the sampling information map corresponding to the sampling information is smaller than or equal to a size of the high-level feature map corresponding to the high-level feature information.
7. The object extraction method of claim 1, wherein acquiring the local feature information of the local feature region comprises:
performing feature extraction on a sampling information map corresponding to the sampling information by using a neural network model to acquire the local feature information of the local feature region.
8. A computer-readable storage medium storing computer-executable instructions for performing the object extraction method of any one of claims 1 to 7.
9. An apparatus for implementing object extraction, comprising a memory and a processor, wherein the memory stores instructions executable by the processor for performing the steps of the object extraction method of any one of claims 1 to 7.
10. An object recognition method, comprising:
acquiring local feature information of an object, comprising: locating a local feature region of the object according to high-level feature information of the object and prior knowledge; sampling each located local feature region to acquire sampling information; and performing feature extraction on the sampling information to acquire the local feature information of the local feature region; and
combining the acquired local feature information and the high-level feature information as an object recognition result.
11. The object recognition method of claim 10, wherein locating the local feature region of the object comprises:
locating one or more local feature regions from the high-level feature information of the object according to different parts of the object characterized by the prior knowledge.
12. The object recognition method of claim 10, wherein the sampling comprises up-sampling or down-sampling.
13. The object recognition method of claim 10, wherein a size of a sampling information map corresponding to the sampling information is smaller than or equal to a size of a high-level feature map corresponding to the high-level feature information.
14. The object recognition method of claim 10, wherein acquiring the local feature information of the local feature region comprises:
performing feature extraction on a sampling information map corresponding to the sampling information by using a neural network model to acquire the local feature information of the local feature region.
15. A pedestrian recognition method, comprising:
acquiring local feature information of a pedestrian image, comprising: locating a local feature region according to high-level feature information of the pedestrian image and prior knowledge; sampling each located local feature region to acquire sampling information; and performing feature extraction on the sampling information to acquire the local feature information of the local feature region; and
combining the acquired local feature information and the high-level feature information as a pedestrian recognition result.
16. A computer-readable storage medium storing computer-executable instructions for performing the object recognition method of any one of claims 10 to 14.
17. An apparatus for implementing object recognition, comprising a memory and a processor, wherein the memory stores instructions executable by the processor for performing the steps of the object recognition method of any one of claims 10 to 14.
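The locate-sample-extract pipeline recited in claims 1 and 10 can be sketched as follows. This is an illustrative numpy toy, not the patent's implementation: the vertical-fraction prior (e.g. head/torso/legs for a pedestrian), the nearest-neighbour resampling, and the mean-pooling "feature extractor" are all assumptions standing in for the pre-obtained sub-block information and neural network model the claims describe.

```python
import numpy as np

def locate_regions(feature_map, prior_blocks):
    """Split a high-level feature map (C, H, W) into local feature regions
    using prior knowledge given as vertical (start, end) fractions."""
    C, H, W = feature_map.shape
    regions = []
    for start, end in prior_blocks:
        r0, r1 = int(start * H), int(end * H)
        regions.append(feature_map[:, r0:r1, :])
    return regions

def resample(region, out_h, out_w):
    """Nearest-neighbour up-/down-sampling of a (C, h, w) region to a
    fixed sampling-information-map size."""
    C, h, w = region.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return region[:, rows][:, :, cols]

def extract_local_features(feature_map, prior_blocks, sample_size=(4, 4)):
    """Claim-1 pipeline: locate -> sample -> extract features.
    Mean-pooling stands in for the claimed neural-network extractor."""
    local_feats = []
    for region in locate_regions(feature_map, prior_blocks):
        sampled = resample(region, *sample_size)
        local_feats.append(sampled.mean(axis=(1, 2)))
    return local_feats

# Hypothetical pedestrian prior: head / torso / legs as vertical fractions.
prior = [(0.0, 0.2), (0.2, 0.6), (0.6, 1.0)]
fmap = np.random.rand(8, 16, 8)            # (C, H, W) high-level feature map
local_feats = extract_local_features(fmap, prior)
global_feat = fmap.mean(axis=(1, 2))       # pooled high-level feature
# Claim 10: combine local and high-level features as the recognition result.
result = np.concatenate(local_feats + [global_feat])
```

Note how each sampled region is forced to the same 4x4 map regardless of the sub-block's original height, which is one simple way to satisfy claim 6's constraint that the sampling information map be no larger than the high-level feature map.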
CN201811453893.XA 2018-11-30 2018-11-30 Object extraction method and device Pending CN111259910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811453893.XA CN111259910A (en) 2018-11-30 2018-11-30 Object extraction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811453893.XA CN111259910A (en) 2018-11-30 2018-11-30 Object extraction method and device

Publications (1)

Publication Number Publication Date
CN111259910A true CN111259910A (en) 2020-06-09

Family

ID=70950118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811453893.XA Pending CN111259910A (en) 2018-11-30 2018-11-30 Object extraction method and device

Country Status (1)

Country Link
CN (1) CN111259910A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408448A (en) * 2021-06-25 2021-09-17 之江实验室 Method and device for extracting local features of three-dimensional space-time object and identifying object

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330396A (en) * 2017-06-28 2017-11-07 华中科技大学 Person re-identification method based on multi-attribute and multi-strategy fusion learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DOUG GRAY et al.: "Evaluating appearance models for recognition, reacquisition, and tracking", Proceedings of the 10th International Workshop on Performance Evaluation for Tracking and Surveillance *
RYAN LAYNE et al.: "Person Re-identification by Attributes", Proceedings of the 2012 British Machine Vision Conference, Surrey *
李幼蛟 et al.: "A Survey of Person Re-identification", Acta Automatica Sinica *
罗建豪 et al.: "A Survey of Fine-Grained Image Classification Based on Deep Convolutional Features", Acta Automatica Sinica *

Similar Documents

Publication Publication Date Title
CN110532955B (en) Example segmentation method and device based on feature attention and sub-upsampling
CN110675407B (en) Image instance segmentation method and device, electronic equipment and storage medium
CN106530320B (en) End-to-end image segmentation processing method and system
CN113343778B (en) Lane line detection method and system based on LaneSegNet
CN109885718B (en) Suspected vehicle retrieval method based on deep vehicle sticker detection
CN111091123A (en) Text region detection method and equipment
US20230005278A1 (en) Lane extraction method using projection transformation of three-dimensional point cloud map
CN111382647B (en) Picture processing method, device, equipment and storage medium
CN112329702A (en) Method and device for rapid face density prediction and face detection, electronic equipment and storage medium
CN110569379A (en) Method for manufacturing picture data set of automobile parts
WO2021088504A1 (en) Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
CN115376089A (en) Deep learning-based lane line detection method
CN110879972A (en) Face detection method and device
CN111401421A (en) Image category determination method based on deep learning, electronic device, and medium
CN110991414B (en) Traffic element high-precision segmentation method, electronic equipment and storage medium
CN115100469A (en) Target attribute identification method, training method and device based on segmentation algorithm
CN113592720B (en) Image scaling processing method, device, equipment and storage medium
CN111259910A (en) Object extraction method and device
CN116563553B (en) Unmanned aerial vehicle image segmentation method and system based on deep learning
CN117541546A (en) Method and device for determining image cropping effect, storage medium and electronic equipment
CN112634286A (en) Image cropping method and device
CN116798041A (en) Image recognition method and device and electronic equipment
CN112785601B (en) Image segmentation method, system, medium and electronic terminal
CN113947529A (en) Image enhancement method, model training method, component identification method and related equipment
CN113706552A (en) Method and device for generating semantic segmentation marking data of laser reflectivity base map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200609