CN110276397B - Door mechanism-based image feature extraction method, device and system - Google Patents


Info

Publication number
CN110276397B
CN110276397B CN201910547952.8A
Authority
CN
China
Prior art keywords
layer
information
features
neural network
gate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910547952.8A
Other languages
Chinese (zh)
Other versions
CN110276397A (en)
Inventor
杨茂柯
赵厚龙
李祥泰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Shendong Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shendong Technology Beijing Co ltd filed Critical Shendong Technology Beijing Co ltd
Priority to CN201910547952.8A priority Critical patent/CN110276397B/en
Publication of CN110276397A publication Critical patent/CN110276397A/en
Application granted granted Critical
Publication of CN110276397B publication Critical patent/CN110276397B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A method, device and system for image processing based on a gate mechanism, wherein the method comprises at least the following steps: S10, obtaining a gate G corresponding to the feature X of each layer of the neural network; S20, for the feature X of each layer, using the gate G to enhance useful information and computing the region of feature X that carries useless information; S30, supplementing the useless-information region of each layer's feature X with useful information from the features of other layers; and S40, fully connecting the features of all layers of the neural network. The gate mechanism screens useful information and suppresses useless information, and the full-connection scheme lets every pair of features exchange information so that each feature carries information of different levels. Enough useful information can therefore be obtained from features at different levels during the final feature fusion, without the worry of introducing invalid or even harmful information.

Description

Gate mechanism-based image feature extraction method, device and system
Technical Field
The invention relates to the technical field of machine vision, and in particular to a method, device and system for extracting image features based on a gate mechanism.
Background
In deep learning, the goal of image segmentation is to classify every pixel in an image, which requires features that combine high resolution with high-level semantic information. To obtain high-level semantic features, existing convolutional neural networks need a sufficiently large receptive field, and the simplest way to enlarge the receptive field is repeated down-sampling. As a result, by the time the features carry high-level semantic information their resolution has become very low; meanwhile, the shallow layers of the network hold high-resolution low-level features whose semantic information is weak. To obtain features with both high resolution and high-level semantic information, features of different levels must therefore be fused.
A common approach to feature fusion is to merge features layer by layer from high level to low level and then predict, either directly or after fusing all the different-level features together; the representative works are U-Net and FPN (feature pyramid network). However, features at different levels attend to different things: shallow features focus on detail information, high-level features focus on semantic information, and the two kinds of information differ considerably. Both U-Net and FPN fuse features together indiscriminately, which introduces useless information and loses some of the original useful information. Selectively introducing information into all features would therefore greatly benefit the final prediction.
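As an illustrative aside (not part of the patent text), the indiscriminate top-down fusion of U-Net/FPN-style networks criticized above can be sketched as follows; the function name and toy shapes are our own:

```python
import numpy as np

def naive_topdown_fusion(features):
    """FPN-style fusion sketch: repeatedly upsample the higher-level
    feature and add it to the next lower-level one, with no selection
    of useful versus useless information."""
    fused = features[-1]  # start from the highest (coarsest) level
    for feat in reversed(features[:-1]):
        # nearest-neighbour 2x upsampling by repetition (illustrative only)
        fused = fused.repeat(2, axis=0).repeat(2, axis=1)
        fused = fused + feat  # indiscriminate element-wise addition
    return fused

low = np.zeros((4, 4))   # high-resolution, low-level feature
high = np.ones((2, 2))   # low-resolution, high-level feature
out = naive_topdown_fusion([low, high])  # shape (4, 4)
```

Every location of `high` is added into `out` regardless of whether its information is useful at that location, which is exactly the behaviour the gate mechanism below is designed to avoid.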
Disclosure of Invention
The invention mainly addresses feature selection and feature fusion in deep learning. Its improvement is to extract the useful information in the features through a gate mechanism, then distribute and supplement that useful information among different features in a fully connected manner to achieve feature fusion, and finally fuse all features together for prediction. The gate mechanism screens useful information and suppresses useless information, while the fully connected scheme lets every pair of features exchange information so that each feature carries information of different levels. Enough useful information can therefore be obtained from features at different levels during the final feature fusion, without the worry of introducing invalid or even harmful information.
The invention aims to provide a method, a device and a system for extracting image features based on a gate mechanism; the specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides an image feature extraction method based on a gate mechanism, including:
S10, acquiring the gate corresponding to the feature of the image extracted at each layer of the neural network;
S20, for the feature of the image extracted at each layer of the neural network, using the gate to enhance useful information while computing the region of the feature that carries useless information; wherein the gate function Sigmoid is used as the judgment function for deciding whether feature information is useful;
S30, supplementing the useless-information region of the feature of the image extracted at each layer with useful information from other layers' features;
and S40, fully connecting the features of all layers of the neural network.
In a second aspect, an embodiment of the present invention provides an image feature extraction apparatus based on a gate mechanism, including:
a gate obtaining module, used for obtaining the gates corresponding to the features of the images extracted at each layer of the neural network;
an enhancement module, used for enhancing useful information of the feature of the image extracted at each layer of the neural network by using the gate, and computing the region of the feature that carries useless information; wherein the gate function Sigmoid is used as the judgment function for deciding whether feature information is useful;
a supplement module, used for supplementing the useless-information region of the image feature extracted at each layer with the useful information of other layers' features;
and a full connection module, used for fully connecting the features of all layers of the neural network.
In a third aspect, the invention further provides an image feature extraction system based on a gate mechanism, which comprises a memory and a processor, wherein the memory stores instructions and the processor is configured to perform the following steps according to the instructions stored in the memory:
S10, acquiring the gate corresponding to the feature of the image extracted at each layer of the neural network;
S20, for the feature of the image extracted at each layer of the neural network, using the gate to enhance useful information while computing the region of the feature that carries useless information; wherein the gate function Sigmoid is used as the judgment function for deciding whether feature information is useful;
S30, supplementing the useless-information region of the feature of the image extracted at each layer with useful information from other layers' features;
and S40, fully connecting the features of all layers of the neural network.
Based on the technical scheme of the application, useful information in the features is extracted through the gate mechanism, that useful information is then distributed and supplemented among different features in a fully connected manner to achieve feature fusion, and finally all features are fused together for prediction. The gate mechanism screens useful information and suppresses useless information, while the fully connected scheme lets every pair of features exchange information so that each feature carries information of different levels. Enough useful information can therefore be obtained from features at different levels during the final feature fusion, without the worry of introducing invalid or even harmful information.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a specific algorithm flow for obtaining useful information of each layer's features according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the present invention.
To solve the problems of feature selection and feature fusion in deep learning, the embodiments of the application provide an image processing method and device based on a gate mechanism. The gate mechanism screens useful information and suppresses useless information; the fully connected scheme lets all features exchange information pairwise so that each feature carries information of different levels. The embodiment of the invention therefore first extracts useful information through the gate mechanism, then distributes and supplements it among different features in a fully connected manner to achieve feature fusion, and finally fuses all features together for prediction.
First, the image processing method based on the gate mechanism according to an embodiment of the present invention is described below.
It should be noted that the gate mechanism-based image processing method provided by the embodiment of the present invention is executed by an image processing apparatus, which may be standalone image processing software in the related art or a functional plug-in within image processing software; in addition, the image processing apparatus may be applied to an electronic device, where the electronic device is a terminal device and/or a server.
As shown in fig. 1, an embodiment of the present invention provides an image processing method based on a gate mechanism, including the following steps:
S10, acquiring the gates corresponding to the features of each layer of the neural network.
In one embodiment, the gating feature G corresponding to the feature X of each layer of the neural network can be obtained through the gate function Sigmoid. First, the gate function Sigmoid is introduced: it is a bounded, differentiable real function defined as follows:
Sigmoid(x) = 1 / (1 + e^(-x))
the Sigmoid function has an output range of (0, 1), and the output may represent a probability, a confidence, and the like. In order to reduce the influence of useless information on feature fusion, in order to reduce the useless information as much as possible when performing feature fusion, whether the information in the features is useful needs to be judged before feature fusion, and in the embodiment of the present application, a gate function Sigmoid is selected as a judgment function for judging whether the features are useful. The specific implementation is to use the feature X as the input of the function Sigmoid, and the output of the function, i.e. the gate G, is a gating feature with a value range between (0, 1), and the gating feature G represents the confidence of the input feature X, and when the confidence is higher, the information in the feature X can be considered to be more useful.
In this way, for the L features X1, X2, … XL at different levels of the neural network, the corresponding L gating features G1, G2, … GL can be obtained through the gate function Sigmoid.
S20, for the features of each layer of the neural network, enhancing useful information by using the gate while computing the region of the features that carries useless information;
g is a gating characteristic obtained by inputting the characteristic X of each layer into a gate function Sigmoid, and G belongs to (0, 1). Wherein useful information in each layer's features can be used to supplement the features of other layers.
Specifically, X*G weights the feature X with the gating feature G. Because the gating feature G corresponds point-by-point with the feature X in space, a point with a low value in G marks a point of low confidence in X, i.e. useless information, and X*G further reduces the value at that point, suppressing the useless information; a point with a high value in G marks useful information in X, and X*G barely reduces its value. Therefore, X*G extracts the useful information in a feature while suppressing the useless information.
For the feature Xn of the n-th layer (n a positive integer not greater than L), its useful information needs to be enhanced, i.e. computed through the weighting Xn*Gn. Since weighting may cause information loss, this embodiment compensates for the loss with a residual connection, i.e. the weighting Xn*(1+Gn) is used. That is, for each feature Xn, Xn*(1+Gn) is used to enhance its useful information.
At the same time, the feature Xn of each layer transmits useful information to the other layers according to its gate Gn, and each layer's feature is connected with the features of the other layers so that useful information can be supplemented among the layers.
For the feature Xn of the n-th layer (n a positive integer not greater than L), once its own useful information has been enhanced, its useless information can be supplemented. To supplement the useless information of the feature Xn, the region carrying useless information must first be obtained; in this embodiment it is obtained through Xn*(1-Gn).
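Step S20 for a single layer can be sketched as follows (our NumPy reading of the text, with hypothetical names): X*G suppresses useless information, the residual form X*(1+G) enhances useful information without losing the original signal, and 1-G weights the region that needs supplementing:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_layer(X):
    """Return the gated views of one layer's feature X (step S20)."""
    G = sigmoid(X)               # gating feature, values in (0, 1)
    selected = X * G             # useful info kept, useless suppressed
    enhanced = X * (1.0 + G)     # residual connection compensates loss
    useless_weight = 1.0 - G     # large where the layer needs supplementing
    return selected, enhanced, useless_weight

X = np.array([[3.0, -3.0]])
selected, enhanced, useless_weight = gate_layer(X)
```

For the confident entry 3.0 the enhancement nearly doubles it, while for the low-confidence entry -3.0 the useless-region weight is close to 1, flagging it for supplementation.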
S30, supplementing the area with useless information in each layer of characteristics with useful information of other layer characteristics;
In this embodiment, the useful information in the features of all other layers is used to supplement the useless-information region of the layer's feature Xn, thereby enhancing that region.
First, so that useless information in the other layers' features does not disturb this layer's feature, those features are weighted, i.e. in the form Xi*Gi. And since what needs supplementing is the useless-information region of Xn, the information transmitted from the other layers' features is multiplied by 1-Gn so that it is delivered to the region that needs it. That is, after enhancement, the information of the useless-information region of feature Xn is:
(X1*G1 + X2*G2 + … + Xn-1*Gn-1 + Xn+1*Gn+1 + … + XL*GL) * (1-Gn)
That is, for the feature Xn extracted at each layer, the useful information obtained with the gate mechanism in this embodiment can be expressed as:
Xn*(1+Gn) + (X1*G1 + X2*G2 + … + Xn-1*Gn-1 + Xn+1*Gn+1 + … + XL*GL) * (1-Gn)
fig. 2 shows a specific algorithm flow for obtaining useful information based on the gate mechanism according to an embodiment of the present invention.
In the above manner, for the feature Xn of each layer, its useful information can be enhanced and its useless information suppressed.
And S40, fully connecting the characteristics of all layers of the neural network.
The operations of steps S20-S30 are performed on the features of all layers; that is, every layer's feature is supplemented with useful information from the other layers, i.e. the features of all layers are fully connected. Useful information is distributed and supplemented among the features of different layers in this fully connected manner to achieve feature fusion, and finally all features are fused together for prediction.
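Putting S10-S40 together, a hedged end-to-end sketch of the fully connected gated fusion — our reading of the formula Xn*(1+Gn) + (Σ over i≠n of Xi*Gi)*(1-Gn), assuming all layer features have already been resized to one common shape (the patent does not specify the resizing; the names below are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_full_connection(features):
    """features: list of L same-shape arrays X1..XL.
    Returns the L fused features, each enhanced by its own gate and
    supplemented by the gated information of every other layer."""
    gates = [sigmoid(X) for X in features]               # S10: one gate per layer
    selected = [X * G for X, G in zip(features, gates)]  # Xi*Gi for each layer
    total = sum(selected)
    fused = []
    for X, G, S in zip(features, gates, selected):
        others = total - S        # sum of Xi*Gi over all i != n
        # S20 residual enhancement + S30 supplement of the useless region
        fused.append(X * (1.0 + G) + others * (1.0 - G))
    return fused                  # S40: every layer exchanged with every other

Xs = [np.full((2, 2), v) for v in (0.5, 1.0, 2.0)]
fused = gated_full_connection(Xs)
```

Computing `total` once and subtracting each layer's own term keeps the pairwise exchange O(L) instead of O(L^2) additions, without changing the result.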
The feature Xn of a layer both gathers useful information from the features of the other layers and transmits its own useful information to them, so that useful information is fully exchanged among the layers.
The fully connected scheme lets the features of all layers exchange information pairwise, so that every layer's feature carries information of different levels. Enough useful information can therefore be obtained from features at different levels during the final feature fusion, without the worry of introducing invalid or even harmful information.
Therefore, the method and the device retain useful information, suppress useless information, and at the same time identify the regions that need supplementary information. Information is exchanged in a fully connected manner, so enough useful information can be obtained from features at different levels without introducing invalid or harmful information, and each feature's useful information supplements the others, giving every feature information of different levels and achieving feature fusion.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides an image processing apparatus, including:
the gate obtaining module is used for obtaining gates corresponding to the characteristics of each layer of the neural network;
the enhancing module is used for enhancing useful information of the characteristics of each layer of the neural network by using the gate and calculating the area of the characteristics with useless information;
the supplement module is used for supplementing the area with the useless information in each layer of characteristics by adopting the useful information of other layer characteristics;
and the full connection module is used for fully connecting the characteristics of all layers of the neural network.
The invention also provides an image processing system based on the gate mechanism, which comprises a memory and a processor.
The memory is used for storing applications, instructions, modules and data; the processor executes the various functional applications (such as the image segmentation apparatus of the present invention) and the data processing of the client by running the applications, instructions, modules and data stored in the memory. The memory mainly comprises an application storage area and a data storage area: the application storage area stores an operating system and application software (such as sound-playing software and image-playing software); the data storage area stores data created by use of the client (such as audio data, video data and a phonebook). The memory includes high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor is the control center of the client: it executes the application software and/or modules stored in the memory, calls the data stored in the memory, and performs the various functions and data processing of the client.
In addition, the client may further include a camera, a microphone, a bluetooth module, a sensor, a power supply, and the like, which are not described herein again.
In an embodiment of the present invention, the memory stores instructions, and the processor is configured to perform the following steps according to the instructions stored in the memory:
s10, acquiring a gate corresponding to the characteristics of each layer of the neural network;
s20, for the characteristics of each layer of the neural network, enhancing useful information by using a gate, and simultaneously calculating the area of the characteristics with useless information;
s30, supplementing the area with useless information in each layer of characteristics with useful information of other layer characteristics;
and S40, fully connecting the characteristics of all layers of the neural network.
In summary, the technical scheme of the application retains useful information, suppresses useless information, uses the gate mechanism to identify the regions that need supplementary information, and exchanges that information in a fully connected manner. Enough useful information can thus be obtained from features at different levels without introducing invalid or harmful information, and each feature's useful information supplements the others, so that every feature carries information of different levels and feature fusion is achieved.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules and the instructions described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An image feature extraction method based on a gate mechanism, comprising at least the following steps:
s10, obtaining a gate G corresponding to the feature X of the image extracted from each layer of the neural network;
s20, for the characteristic X of the image extracted from each layer of the neural network, using the gate G to enhance the useful information, and calculating the area of the characteristic X with useless information; wherein, a gate function Sigmoid is used as a judging function for judging whether the characteristics are useful or not;
s30, supplementing the area with useless information in the characteristic X of the image extracted by each layer by using useful information of other layer characteristics;
and S40, fully connecting the characteristics of all layers of the neural network.
2. The method according to claim 1, wherein step S10 includes,
and obtaining a gate G corresponding to the feature X of the image extracted from each layer of the neural network through a gate function Sigmoid.
3. The method according to claim 1 or 2, wherein step S20 includes,
useful information in the features is extracted through X*G, and useless information in the features is suppressed.
4. The method according to claim 1 or 2, wherein step S20 includes,
for the feature Xn of the image extracted at each layer in the L-layer neural network, useful information is enhanced by utilizing Xn*(1+Gn), wherein Gn is the gating feature corresponding to the feature Xn obtained through the gate function Sigmoid, and n is a positive integer less than or equal to L.
5. The method according to claim 1 or 2, wherein step S20 includes,
and obtaining a region with useless information of the feature Xn of the image extracted at each layer of the L-layer neural network through Xn*(1-Gn), wherein Gn is the gating feature corresponding to the feature Xn obtained through the gate function Sigmoid, and n is a positive integer less than or equal to L.
6. The method according to claim 4, wherein step S20 further includes,
and obtaining a region with useless information of the feature Xn of the image extracted at each layer of the L-layer neural network through Xn*(1-Gn), wherein Gn is the gating feature corresponding to the feature Xn obtained through the gate function Sigmoid, and n is a positive integer less than or equal to L.
7. The method according to claim 1, 2 or 6, wherein step S30 comprises,
for the L features X1, X2, … XL of the images extracted at different layers in the neural network, the corresponding L gating features G1, G2, … GL are respectively obtained through the gate function Sigmoid; for the region with useless information in the feature Xn of the image extracted at each layer in the L-layer neural network, the information supplemented by the useful information of the other layers' features is:
(X1*G1 + X2*G2 + … + Xn-1*Gn-1 + Xn+1*Gn+1 + … + XL*GL) * (1-Gn), wherein n is a positive integer less than or equal to L.
8. The method according to claim 7, wherein step S30 further includes,
for the L features X1, X2, … XL of the images extracted at different layers in the neural network, the corresponding L gating features G1, G2, … GL are respectively obtained through the gate function Sigmoid; for the feature Xn of the image extracted at each layer in the L-layer neural network, the useful information obtained by using the gate mechanism is expressed as:
Xn*(1+Gn) + (X1*G1 + X2*G2 + … + Xn-1*Gn-1 + Xn+1*Gn+1 + … + XL*GL) * (1-Gn), wherein n is a positive integer less than or equal to L.
9. The method according to claim 1, 2, 6 or 8, wherein step S40 comprises,
after the features of the images extracted by all layers in the L-layer neural network are subjected to the operations of the steps S20-S30, the features of all layers are supplemented with useful information from other layers.
10. An image feature extraction device based on a gate mechanism, comprising:
the gate obtaining module is used for obtaining gates corresponding to the features of the images extracted from each layer of the neural network;
the enhancement module is used for enhancing useful information of the features of the image extracted from each layer of the neural network by using a gate and calculating a region of the features with useless information; wherein, a gate function Sigmoid is used as a judging function for judging whether the characteristics are useful or not;
the supplementary module is used for supplementing the area with useless information in the characteristics of the image extracted by each layer by using the useful information of the characteristics of other layers;
and the full connection module is used for fully connecting the characteristics of all layers of the neural network.
11. An image feature extraction system based on a gate mechanism, comprising a memory and a processor, wherein the memory stores instructions; the processor is configured to perform the following steps according to the instructions stored in the memory:
s10, acquiring a gate corresponding to the feature of the image extracted by each layer of the neural network;
s20, enhancing useful information of the image features extracted from each layer of the neural network by using a gate, and calculating regions with useless information of the features; wherein, a gate function Sigmoid is used as a judging function for judging whether the characteristics are useful or not;
s30, supplementing the area with useless information in the image characteristics extracted by each layer by using useful information of other layer characteristics;
and S40, fully connecting the characteristics of all layers of the neural network.
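The S10-S40 pipeline of claim 11 can be sketched end to end. This is a minimal NumPy illustration under assumed simplifications (per-layer scalar gate weights and a single fully-connected weight matrix are hypothetical stand-ins for the learned parameters; real gates would be produced by convolutions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def extract(features, gate_weights, fc_weight):
    # S10: obtain a gate per layer (here, Sigmoid of a scalar-scaled feature)
    gates = [sigmoid(x * w) for x, w in zip(features, gate_weights)]
    # S20: enhance useful information; (1 - Gn) marks the useless regions
    enhanced = [x * (1 + g) for x, g in zip(features, gates)]
    # S30: supplement useless regions with gated information from other layers
    gated = [x * g for x, g in zip(features, gates)]
    gated_sum = np.sum(gated, axis=0)
    supplemented = [e + (gated_sum - xg) * (1 - g)
                    for e, xg, g in zip(enhanced, gated, gates)]
    # S40: fully connect the features of all layers
    stacked = np.concatenate([s.ravel() for s in supplemented])
    return fc_weight @ stacked
```

With two identity features, zero gate weights (gates of 0.5), and an all-ones fully-connected weight, the pipeline reduces to summing four supplemented values of 1.75.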
CN201910547952.8A 2019-06-24 2019-06-24 Door mechanism-based image feature extraction method, device and system Active CN110276397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910547952.8A CN110276397B (en) 2019-06-24 2019-06-24 Door mechanism-based image feature extraction method, device and system


Publications (2)

Publication Number Publication Date
CN110276397A CN110276397A (en) 2019-09-24
CN110276397B true CN110276397B (en) 2021-03-09

Family

ID=67961512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910547952.8A Active CN110276397B (en) 2019-06-24 2019-06-24 Door mechanism-based image feature extraction method, device and system

Country Status (1)

Country Link
CN (1) CN110276397B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04184686A (en) * 1990-11-20 1992-07-01 Canon Inc Pattern recognizing device
CN102957929A (en) * 2011-08-22 2013-03-06 Sony Corporation Video signal processing apparatus, video signal processing method, and computer program
CN106060372A (en) * 2015-04-02 2016-10-26 Axis AB Method and system for image stabilization
CN107341462A (en) * 2017-06-28 2017-11-10 University of Electronic Science and Technology of China A video classification method based on an attention mechanism
CN108389224A (en) * 2018-02-26 2018-08-10 Beijing SenseTime Technology Development Co., Ltd. Image processing method and device, electronic equipment and storage medium
CN108710902A (en) * 2018-05-08 2018-10-26 Jiangsu Yunli IoT Technology Co., Ltd. A classification method for high-resolution remote-sensing images based on artificial intelligence


Also Published As

Publication number Publication date
CN110276397A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN109087380B (en) Cartoon drawing generation method, device and storage medium
CN111062964B (en) Image segmentation method and related device
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN114549913B (en) Semantic segmentation method and device, computer equipment and storage medium
CN112232165B (en) Data processing method, device, computer and readable storage medium
CN113239914B (en) Classroom student expression recognition and classroom state evaluation method and device
CN114511576B (en) Image segmentation method and system of scale self-adaptive feature enhanced deep neural network
CN112561028A (en) Method for training neural network model, and method and device for data processing
CN112488923A (en) Image super-resolution reconstruction method and device, storage medium and electronic equipment
CN117576264B (en) Image generation method, device, equipment and medium
US20220335685A1 (en) Method and apparatus for point cloud completion, network training method and apparatus, device, and storage medium
CN111833360A (en) Image processing method, device, equipment and computer readable storage medium
CN111145202B (en) Model generation method, image processing method, device, equipment and storage medium
Ma et al. Forgetting to remember: A scalable incremental learning framework for cross-task blind image quality assessment
CN111709415A (en) Target detection method, target detection device, computer equipment and storage medium
WO2022096944A1 (en) Method and apparatus for point cloud completion, network training method and apparatus, device, and storage medium
WO2024041108A1 (en) Image correction model training method and apparatus, image correction method and apparatus, and computer device
CN110276397B (en) Door mechanism-based image feature extraction method, device and system
CN113674383A (en) Method and device for generating text image
CN112116700A (en) Monocular view-based three-dimensional reconstruction method and device
CN117593619B (en) Image processing method, device, electronic equipment and storage medium
CN116309274B (en) Method and device for detecting small target in image, computer equipment and storage medium
CN112329925B (en) Model generation method, feature extraction method, device and electronic equipment
CN115620013B (en) Semantic segmentation method and device, computer equipment and computer readable storage medium
CN116630868B (en) Video classification method, video classification device, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220729

Address after: Room 618, 6 / F, building 5, courtyard 15, Kechuang 10th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Patentee after: Xiaomi Automobile Technology Co.,Ltd.

Address before: 100080 soho1219, Zhongguancun, 8 Haidian North 2nd Street, Haidian District, Beijing

Patentee before: SHENDONG TECHNOLOGY (BEIJING) Co.,Ltd.