WO2020192113A1 - Image processing method and apparatus, electronic device, and storage medium (图像处理方法及装置、电子设备和存储介质)
- Publication number
- WO2020192113A1 (PCT/CN2019/114465)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image feature
- feature
- matrix
- weight
- Prior art date
Classifications
- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/20—Image enhancement or restoration using local operators
- G06F18/2113—Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
- G06F18/251—Fusion techniques of input or preprocessed data
- G06F18/253—Fusion techniques of extracted features
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
- G06T2207/10004—Still image; Photographic image
- G06T2207/20032—Median filtering
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates to the field of computer vision, and in particular to an image processing method and device, electronic equipment, and storage medium.
- Feature fusion is one of the important issues in the field of computer vision and intelligent video surveillance.
- face feature fusion has important application significance in many fields, such as being applied to face recognition systems.
- in a typical approach, the features of multiple frames of images are directly averaged to serve as the fused feature.
- although this method is simple, its performance is poor, and in particular it is not robust to outliers.
- the embodiments of the present disclosure provide an image processing method and device, electronic equipment, and storage medium.
- an image processing method, including: respectively acquiring image features of multiple images of the same object; determining, according to the image feature of each image, a weight coefficient corresponding to each image feature one-to-one; and performing, based on the weight coefficient of each image feature, feature fusion processing on the image features of the multiple images to obtain the fusion feature of the multiple images.
- the determining a weight coefficient corresponding to each image feature one-to-one according to the image feature of each image includes: forming an image feature matrix based on the image feature of each image; performing feature fitting processing on the image feature matrix to obtain a first weight matrix; and determining the weight coefficient corresponding to each image feature based on the first weight matrix.
- the performing feature fitting processing on the image feature matrix to obtain the first weight matrix includes: performing feature fitting processing on the image feature matrix by using a regularized linear least squares estimation algorithm, and obtaining the first weight matrix when the preset objective function takes its minimum value.
- the determining the weight coefficient corresponding to each image feature based on the first weight matrix includes: determining each first weight coefficient included in the first weight matrix as the weight coefficient corresponding to each image feature; or performing first optimization processing on the first weight matrix, and determining each first weight coefficient included in the optimized first weight matrix as the weight coefficient corresponding to each image feature.
- the performing the first optimization processing on the first weight matrix includes: determining the fitted image feature of each image based on the first weight coefficient of each image feature included in the first weight matrix, the fitted image feature being the product of the image feature and the corresponding first weight coefficient; using the first error between the image feature of each image and the fitted image feature to perform the first optimization processing of the first weight matrix to obtain a first optimized weight matrix; in response to the difference between the first weight matrix and the first optimized weight matrix satisfying the first condition, determining the first optimized weight matrix as the optimized first weight matrix; and, in response to the difference between the first weight matrix and the first optimized weight matrix not satisfying the first condition, using the first optimized weight matrix to obtain new fitted image features, and repeatedly executing the first optimization processing based on the new fitted image features until the difference between the obtained k-th optimized weight matrix and the (k-1)-th optimized weight matrix satisfies the first condition, and determining the k-th optimized weight matrix as the optimized first weight matrix, where k is a positive integer greater than 1.
- the using the first error between the image feature of each image and the fitted image feature to perform the first optimization processing of the first weight matrix includes: obtaining the first error between each image feature and its fitted image feature according to the sum of the squares of the differences between the corresponding elements of the image feature and the fitted image feature; obtaining the second weight coefficient of each image feature based on each of the first errors; and performing the first optimization processing of the first weight matrix based on the second weight coefficients of the images to obtain the first optimized weight matrix corresponding to the first weight matrix.
- the obtaining the second weight coefficient of each image feature based on each of the first errors includes: obtaining the second weight coefficient of each image feature based on each of the first errors in a first manner, where the expression of the first manner is: w_i = 1 if |e_i| < k, and w_i = k/|e_i| if |e_i| ≥ k;
- w_i is the second weight coefficient of the i-th image;
- e_i represents the first error between the i-th image feature and its corresponding fitted image feature;
- i is an integer between 1 and N, and N is the number of images;
- k = 1.345σ, where σ is the standard deviation of the errors e_i.
- the determining the weight coefficients corresponding to the image features one-to-one according to the image features of each image further includes: forming an image feature matrix based on the image features of each image; performing median filtering processing on the image feature matrix to obtain a median feature matrix; and determining the weight coefficient corresponding to each image feature based on the median feature matrix.
- the performing median filtering processing on the image feature matrix to obtain the median feature matrix includes: determining, for the elements at each same position, the median value of the image features in the image feature matrix; and obtaining the median feature matrix based on the median value of the elements at each position.
- the determining the weight coefficient corresponding to each image feature based on the median feature matrix includes: acquiring a second error between each image feature and the median feature matrix; in response to the second error between the image feature and the median feature matrix satisfying the second condition, configuring the weight coefficient of the image feature as the first weight; and in response to the second error between the image feature and the median feature matrix not satisfying the second condition, determining the weight coefficient of the image feature in a second manner.
- the expression of the second manner is:
- b h is the weight coefficient of the h-th image determined by the second method
- e h is the second error between the image feature of the h-th image and the median feature matrix
- h is an integer value from 1 to N
- N represents the number of images.
- the second condition is: e_h > K·MADN, where MADN = median([e_1, e_2, ... e_N])/0.675;
- e_h is the second error between the image feature of the h-th image and the median feature matrix;
- h is an integer value from 1 to N;
- N represents the number of images;
- K is the judgment threshold;
- median represents the median filter function.
- the performing feature fusion processing on the image features of the multiple images based on the weight coefficients of the image features to obtain the fusion feature of the multiple images includes: obtaining the fusion feature by using the sum of the products of each image feature and its corresponding weight coefficient.
- the method further includes: performing the recognition operation of the same object by using the fusion feature.
- before the determining the weight coefficient corresponding to each image feature according to the image feature of each image, the method further includes: obtaining selection information on the acquisition mode of the weight coefficient; determining the acquisition mode of the weight coefficient based on the selection information; and executing, based on the determined acquisition mode, the determining of the weight coefficient corresponding to each image feature according to the image feature of each image; the acquisition modes of the weight coefficient include obtaining the weight coefficient by means of feature fitting and obtaining the weight coefficient by means of median filtering.
- an image processing device, which includes: an acquisition module configured to respectively acquire image features of multiple images of the same object; a determination module configured to determine, according to the image feature of each image, the weight coefficients corresponding to the image features one-to-one; and a fusion module configured to perform feature fusion processing on the image features of the multiple images based on the weight coefficients of the image features to obtain the fusion feature of the multiple images.
- the determining module includes: a first establishing unit configured to form an image feature matrix based on the image features of each image; a fitting unit configured to perform feature fitting processing on the image feature matrix to obtain a first weight matrix; and a first determining unit configured to determine the weight coefficient corresponding to each image feature based on the first weight matrix.
- the fitting unit is further configured to perform feature fitting processing on the image feature matrix by using a regularized linear least squares estimation algorithm, and to obtain the first weight matrix when the preset objective function takes its minimum value.
- the determining module further includes an optimization unit configured to perform the first optimization processing on the first weight matrix; the first determining unit is further configured to determine each first weight coefficient included in the first weight matrix as the weight coefficient corresponding to each image feature, or to determine each first weight coefficient included in the optimized first weight matrix as the weight coefficient corresponding to each image feature.
- the optimization unit is further configured to: determine the fitted image feature of each image based on the first weight coefficient of each image feature included in the first weight matrix; perform, by using the first error between the image feature of each image and the fitted image feature, the first optimization processing of the first weight matrix to obtain the first optimized weight matrix; in response to the difference between the first weight matrix and the first optimized weight matrix satisfying the first condition, determine the first optimized weight matrix as the optimized first weight matrix; and, in response to the difference between the first weight matrix and the first optimized weight matrix not satisfying the first condition, use the first optimized weight matrix to obtain new fitted image features, and repeatedly execute the first optimization processing based on the new fitted image features until the difference between the obtained k-th optimized weight matrix and the (k-1)-th optimized weight matrix satisfies the first condition, and determine the k-th optimized weight matrix as the optimized first weight matrix, where k is a positive integer greater than 1; wherein the fitted image feature is the product of the image feature and the corresponding first weight coefficient.
- the optimization unit is further configured to: obtain the first error between each image feature and its fitted image feature according to the sum of the squares of the differences between the corresponding elements of the image feature and the fitted image feature; obtain the second weight coefficient of each image feature based on each of the first errors; and perform the first optimization processing of the first weight matrix based on the second weight coefficients of the images to obtain the first optimized weight matrix corresponding to the first weight matrix.
- the optimization unit is further configured to obtain the second weight coefficient of each image feature based on each of the first errors in the first manner, wherein the expression of the first manner is: w_i = 1 if |e_i| < k, and w_i = k/|e_i| if |e_i| ≥ k;
- w_i is the second weight coefficient of the i-th image;
- e_i represents the first error between the i-th image feature and its corresponding fitted image feature;
- i is an integer between 1 and N, and N is the number of images;
- k = 1.345σ, where σ is the standard deviation of the errors e_i.
- the determining module further includes: a second establishing unit configured to form an image feature matrix based on the image features of each image; a filtering unit configured to perform median filtering processing on the image feature matrix to obtain a median feature matrix; and a second determining unit configured to determine the weight coefficient corresponding to each image feature based on the median feature matrix.
- the filtering unit is further configured to determine the median value of the elements at each same position of the image features in the image feature matrix, and to obtain the median feature matrix based on the median values of the elements at the positions.
- the second determining unit is further configured to: obtain a second error between each image feature and the median feature matrix; in response to the second error between the image feature and the median feature matrix satisfying the second condition, configure the weight coefficient of the image feature as the first weight; and in response to the second error between the image feature and the median feature matrix not satisfying the second condition, determine the weight coefficient of the image feature in the second manner.
- the expression of the second manner is:
- b h is the weight coefficient of the h-th image determined by the second method
- e h is the second error between the image feature of the h-th image and the median feature matrix
- h is an integer value from 1 to N
- N represents the number of images.
- the second condition is: e_h > K·MADN, where MADN = median([e_1, e_2, ... e_N])/0.675;
- e_h is the second error between the image feature of the h-th image and the median feature matrix;
- h is an integer value from 1 to N;
- N represents the number of images;
- K is the judgment threshold;
- median represents the median filter function.
- the fusion module is further configured to obtain the fusion feature by using the sum of the products of each image feature and the corresponding weight coefficient.
- the device further includes a recognition module configured to perform the recognition operation of the same object by using the fusion feature.
- the device further includes a mode determination module configured to obtain selection information on the acquisition mode of the weight coefficient and to determine the acquisition mode of the weight coefficient based on the selection information, the acquisition modes of the weight coefficient including obtaining the weight coefficient by means of feature fitting and obtaining the weight coefficient by means of median filtering.
- the determining module is further configured to execute the determination of the weight coefficient corresponding to each image feature based on the determined acquisition mode of the weight coefficient.
- an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the method described in any one of the first aspect.
- a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the method described in any one of the first aspect.
- the embodiments of the present disclosure can fuse different features of the same object, where the weight coefficient corresponding to each image feature can be determined according to the image features of different images of the same object, and the feature fusion of the image features is performed through the weight coefficient. Different weight coefficients can be determined for each image feature. Therefore, the technical solutions of the embodiments of the present disclosure can improve the accuracy of feature fusion.
- Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure
- Fig. 2 shows a flow chart of determining a method for obtaining weight coefficients in an image processing method according to an embodiment of the present disclosure
- Fig. 3 shows a flowchart of step S20 in an image processing method according to an embodiment of the present disclosure
- Fig. 4 shows a flowchart of performing a first optimization process in an image processing method according to an embodiment of the present disclosure
- Fig. 5 shows a flowchart of step S232 in an image processing method according to an embodiment of the present disclosure
- Fig. 6 shows a flowchart of step S20 in an image processing method according to an embodiment of the present disclosure
- Fig. 7 shows a flowchart of step S203 in an image processing method according to an embodiment of the present disclosure
- Fig. 8 shows a block diagram of an image processing device according to an embodiment of the present disclosure
- FIG. 9 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
- FIG. 10 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- the embodiments of the present disclosure provide an image processing method that can perform feature fusion processing of multiple images, which can be applied to any electronic device or server.
- the electronic device may include user equipment (UE), mobile devices, cellular phones, cordless phones, personal digital assistants (PDAs), handheld devices, computing devices, in-vehicle devices, wearable devices, etc.
- the server may include a local server or a cloud server.
- the image processing method can be implemented by a processor calling computer-readable instructions stored in a memory. The foregoing is only an exemplary description of the device and is not a specific limitation of the present disclosure; in other embodiments, the method may also be implemented by other devices capable of performing image processing.
- Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
- the image processing method includes: S10: respectively acquiring image features of multiple images of the same object; S20: determining, according to the image feature of each image, a weight coefficient corresponding to each image feature one-to-one; S30: performing feature fusion processing on the image features of the multiple images based on the weight coefficient of each image feature to obtain the fusion feature of the multiple images.
- feature fusion processing can be performed on features of different images of the same object.
- the type of the object may be any type, for example, a person, an animal, a plant, a vehicle, a cartoon character, etc., which are not specifically limited in the embodiment of the present disclosure.
- different images of the same object can be images taken in the same scene or in different scenes; the embodiment of the present disclosure does not specifically limit the time at which each image is acquired, which can be the same or different for the images.
- the embodiment of the present disclosure may first acquire multiple images of the same object described above.
- the method of acquiring multiple images may include: acquiring multiple images through a camera device; communicating with other devices and receiving multiple images transmitted by those devices; or reading multiple images stored locally or at a specific network address.
- the foregoing is only an exemplary description, and in other embodiments, multiple images for the same object may be obtained in other ways.
- image features in each image can be extracted separately.
- image features can be extracted by feature extraction algorithms, such as facial feature extraction algorithms, edge feature extraction algorithms, etc., or other feature extraction algorithms can also be used to extract relevant features of the object.
- the embodiments of the present disclosure may also extract image features in each image through a neural network with a feature extraction function.
- the image feature can reflect the feature information of the corresponding image, or reflect the feature information of the object in the image.
- the image feature may be the gray value of each pixel in the image.
- the acquired image feature may be the facial feature of the object.
- each image can be processed by a facial feature extraction algorithm to extract facial features in the image.
- each image may be input to a neural network capable of obtaining facial features in the image, and the facial features of each image can be obtained through the neural network.
- the neural network can be a neural network that can obtain the image features of the image after the training is completed and then perform object recognition in the image.
- the image feature may be the result output by the final convolutional layer of the neural network (the feature obtained before classification and recognition).
- the neural network may be a convolutional neural network.
- corresponding image features can also be obtained through corresponding feature extraction algorithms or neural networks, which are not specifically limited in the embodiments of the present disclosure.
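- As a minimal, hedged illustration of the simplest feature mentioned above (the per-pixel gray values), the following Python sketch flattens an image into a normalized feature vector; the function name and the normalization step are illustrative assumptions, and a real system would substitute the embedding of a facial-feature network:

```python
import numpy as np

def extract_feature(image: np.ndarray) -> np.ndarray:
    """Illustrative feature extractor using per-pixel gray values.

    image: (H, W) grayscale array. A production system would replace this
    with a neural network embedding (the pre-classification layer output).
    """
    feature = image.astype(np.float64).ravel()
    # L2-normalize so features from differently exposed images are
    # comparable (an assumption, not required by the method itself).
    norm = np.linalg.norm(feature)
    return feature / norm if norm > 0 else feature
```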
- the embodiment of the present disclosure can determine the weight coefficient of each image feature according to the feature parameters in the image feature of each image; the weight coefficient may be a value between [0, 1] or another value, which is not specifically restricted in the embodiment of the present disclosure.
- by configuring different weight coefficients for the image features, image features with higher accuracy can be emphasized, so that the accuracy of the fused feature obtained by the feature fusion processing can be improved.
- S30 Perform feature fusion processing on the image features of the multiple images based on the weight coefficient of each of the image features to obtain the fusion features of the multiple images.
- the manner of performing feature fusion processing may include: obtaining the fusion feature by using the sum value of the product of each image feature and the corresponding weight coefficient.
- the fusion feature can be obtained by the following formula: G = Σ_{i=1..N} b_i·X_i, where:
- G represents the generated fusion feature;
- i is an integer value between 1 and N, and N represents the number of images;
- b_i represents the weight coefficient of the image feature X_i of the i-th image.
- the embodiment of the present disclosure can perform multiplication processing on the image feature and the corresponding weight coefficient, and then perform the addition processing on the multiplication results obtained by each multiplication process, that is, the fusion feature of the embodiment of the present disclosure can be obtained.
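- A minimal sketch of this weighted fusion, assuming the N image features are stacked row-wise into a matrix X of shape (N, D) and b holds the N weight coefficients (names follow the notation above):

```python
import numpy as np

def fuse_features(X: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fusion feature G = sum over i of b_i * X_i.

    X: (N, D) image feature matrix, one image feature per row.
    b: (N,) weight coefficients, one per image feature.
    """
    return b @ X  # weighted sum of the rows of X
```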
- in this way, the weight coefficient corresponding to each image feature can be determined according to the feature parameters in the image feature, and the fusion feature of the images can be obtained according to the weight coefficients instead of simply taking the average value of the image features; this improves the accuracy of the fusion feature while remaining simple and convenient.
- the weight coefficient of each image feature can be determined.
- the weight coefficients can be obtained by feature fitting; in other possible implementation manners, the weight coefficients can be obtained by median filtering; in still other implementation manners, the weight coefficients can also be obtained through averaging or other processing, which is not specifically limited in the embodiment of the present disclosure.
- a method for obtaining each weight coefficient may be first determined, such as a feature fitting method or a median filtering method.
- Fig. 2 shows a flowchart of determining the manner of obtaining weight coefficients in an image processing method according to an embodiment of the present disclosure. Before the determining of the weight coefficient corresponding to each image feature according to the image feature of each image, the method further includes:
- the selection information is the mode selection information for performing the operation of obtaining the weight coefficients.
- the selection information may be first selection information for obtaining the weight coefficients using the first mode (such as the feature fitting method), or second selection information for obtaining the weight coefficients using the second mode (such as the median filtering method).
- it may also include using other modes to obtain the selection information of the weight coefficient, which is not specifically limited in the embodiment of the present disclosure.
- the manner of obtaining the selection information may include receiving input information received by the input component, and determining the selection information based on the input information.
- the input component may include a switch, a keyboard, a mouse, an audio receiving interface, a touch panel, a touch screen, a communication interface, etc.
- the embodiment of the present disclosure does not specifically limit this; any component that can receive the selection information can serve as an embodiment of the present disclosure.
- the corresponding mode information can be obtained according to the received selection information. For example, in the case that the selection information includes the first selection information, it can be determined to use the first mode (the feature fitting method) to perform the acquisition of the weight coefficients; in the case that the selection information includes the second selection information, it can be determined to use the second mode (the median filtering method) to perform the acquisition of the weight coefficients.
- the method of obtaining the weight coefficient corresponding to the selection information can be determined accordingly.
- different acquisition modes of the weight coefficients may differ in at least one of accuracy, amount of calculation, and calculation speed.
- the accuracy of the first mode may be higher than that of the second mode, while the operation speed of the first mode may be lower than that of the second mode, but this is not a specific limitation of the embodiment of the present disclosure. Therefore, in the embodiments of the present disclosure, the user can select a suitable mode to perform the acquisition of the weight coefficients according to different needs.
- the acquisition operation of the weight information can be performed according to the determined mode.
- the selection of the acquisition mode of the weight coefficient can be realized through the above-mentioned method. Under different requirements, different modes can be used to perform the acquisition of the weight coefficient, which has better applicability.
- Fig. 3 shows a flowchart of step S20 in an image processing method according to an embodiment of the present disclosure, wherein the determining of a weight coefficient corresponding to each image feature according to the image feature of each image (step S20) may include:
- the image features of each image can be expressed in the form of feature vectors, and the dimensions of the image features of the images are the same, namely D.
- the image feature matrix X formed according to the image features of each image can be expressed as: X = [X_1; X_2; ...; X_N] (2), where X_i = (x_i1, x_i2, ..., x_iD) is the image feature of the i-th image, i being an integer from 1 to N.
- based on this, the image feature matrix composed of the image features can be obtained. The elements of each row in the image feature matrix can be regarded as the image feature of one image, the rows corresponding to the image features of different images; alternatively, the elements of each column can be used as the image feature of one image, the columns corresponding to the image features of different images. The embodiment of the present disclosure does not specifically limit the arrangement of the image feature matrix.
- after the image feature matrix is obtained, the feature fitting processing of the image feature matrix can be performed; the embodiment of the present disclosure can use a regularized least-squares linear regression algorithm to perform the feature fitting processing.
- a preset objective function can be set, and the preset objective function is a function related to weight coefficients.
- when the preset objective function takes its minimum value, the first weight matrix formed by the weight coefficients is determined.
- the dimension of the weight matrix is the same as the number of image features, and the final weight coefficient can be determined according to each element in the first weight matrix.
- the preset objective function expression can be: ε(b) = ||Y − X^T·b||^2 + λ·||b||_2^2, where:
- X represents the image feature matrix, and X^T represents the transpose matrix of X;
- Y represents the observation matrix, which is the same as X;
- λ represents the regularization parameter, and ||b||_2^2 represents the L2-norm regularization term of the parameter b.
- if each image feature is a row vector of the image feature matrix, the generated first weight matrix is a column vector; conversely, if each image feature is a column vector, the generated first weight matrix is a row vector. The dimension of the first weight matrix is the same as the number of image features, that is, the number of images.
- the embodiment of the present disclosure can determine the value of the first weight matrix b when the above objective function takes its minimum value. At this time, the final first weight matrix can be obtained, and the expression of the first weight matrix can be: b = (X·X^T + λ·I)^(−1)·X·Y.
- in this way, the first weight matrix can be obtained through the feature fitting processing.
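- As a hedged illustration, the following sketch solves the closed form above directly. Since the equation images of the publication are not reproduced in this text, the choice of observation vector y (for example, the mean feature of the set) and the λ value are assumptions; the solve itself is the standard minimizer of the stated regularized objective:

```python
import numpy as np

def first_weight_matrix(X: np.ndarray, y: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Minimize ||y - X.T @ b||^2 + lam * ||b||^2 in closed form.

    X:   (N, D) image feature matrix, one feature per row.
    y:   (D,) observation vector; taking it from X itself (e.g. the mean
         feature) is an assumption, since the text only states Y equals X.
    lam: regularization parameter lambda (value chosen for illustration).
    """
    N = X.shape[0]
    return np.linalg.solve(X @ X.T + lam * np.eye(N), X @ y)

# Example reading: regress the mean feature onto the set of features.
# b = first_weight_matrix(X, X.mean(axis=0))
```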
- in other embodiments, the feature fitting processing of the image feature matrix can also be performed by other feature fitting methods to obtain the corresponding first weight matrix, or different preset objective functions can be set to perform the feature fitting processing, which is not specifically limited in the embodiment of the present disclosure.
- the weight coefficient corresponding to the image feature can be determined according to the obtained first weight matrix.
- each element included in the first weight matrix may be directly used as a weight coefficient, that is, each first weight coefficient included in the first weight matrix may be determined as a weight coefficient corresponding to each image feature .
- alternatively, optimization processing may be performed on the first weight matrix to obtain the optimized first weight matrix, and the elements in the optimized first weight matrix are used as the weight coefficients of the image features. That is, the first optimization processing may be performed on the first weight matrix, and each first weight coefficient included in the optimized first weight matrix may be determined as the weight coefficient corresponding to each image feature.
- through the first optimization processing, abnormal values in the first weight matrix can be detected and correspondingly optimized, improving the accuracy of the obtained weight matrix.
- Fig. 4 shows a flowchart of performing a first optimization process in an image processing method according to an embodiment of the present disclosure.
- performing a first optimization process on the first weight matrix, and determining each first weight coefficient included in the optimized first weight matrix as the weight coefficient corresponding to each image feature may include:
- S231 Determine a fitted image feature of each image based on the first weight coefficient of each image feature included in the first weight matrix, where the fitted image feature is the product of the image feature and the corresponding first weight coefficient .
- the fitted image feature of each image feature may be obtained first based on the determined first weight matrix.
- the first weight coefficient of each image feature included in the first weight matrix can be multiplied with the corresponding image feature to obtain the fitted image feature of the image feature.
- for example, the first weight coefficient b_i corresponding to the image feature X_i of the i-th image in the first weight matrix may be multiplied by the image feature X_i to obtain the fitted image feature b_i·X_i.
- S232 Use the first error between the image feature of each image and the fitted image feature to perform the first optimization process of the first weight matrix to obtain the first optimized weight matrix.
- the first error between the image feature and its corresponding fitted image feature can be obtained.
- the embodiment of the present disclosure can obtain the first error between the image feature and the fitted image feature according to the following formula: e_i = Σ_{j=1..D} (x_ij − b_i·x_ij)^2 (5), where:
- e_i represents the first error between the i-th image feature and its corresponding fitted image feature;
- i is an integer between 1 and N, and N is the number of image features;
- j is an integer between 1 and D, and D denotes the dimension of each image feature;
- X_i represents the image feature of the i-th image, and b_i·X_i represents the fitted image feature corresponding to the i-th image feature.
- the first error between the image feature and the fitted image feature can also be determined in other ways.
- the average value of the difference between the fitted image feature and the image feature can be directly calculated.
- the embodiment of the present disclosure does not specifically limit the method for determining the first error.
- the first error can be used to perform the first optimization process of the first weight matrix to obtain the first optimized weight matrix.
- the elements in the first optimization weight matrix can also represent the weight coefficients after the first optimization corresponding to each image feature.
- step S233: Determine whether the difference between the first weight matrix and the first optimized weight matrix satisfies the first condition; if the first condition is satisfied, step S234 is executed, and if it is not satisfied, step S235 is executed.
- after the first optimization processing result (the first optimized weight matrix) of the first weight matrix is obtained based on the first error, it can be determined whether the difference between the first optimized weight matrix and the first weight matrix satisfies the first condition. If the difference satisfies the first condition, the first optimized weight matrix does not need to be further optimized and can be determined as the final optimized weight matrix obtained by the first optimization processing. If the difference does not satisfy the first condition, the optimization processing on the first optimized weight matrix needs to continue.
- the first condition of the embodiment of the present disclosure may be that the absolute value of the difference between the first optimized weight matrix and the first weight matrix is less than a first threshold, and the first threshold is a preset threshold, which may be less than 1.
- the value of the first threshold can be set according to requirements, and the embodiment of the present disclosure does not specifically limit this, for example, it can be 0.01.
- S234 Determine the first optimized weight matrix as the optimized first weight matrix.
- the first optimized weight matrix is directly determined as the optimization weight matrix obtained by the final first optimization process.
- S235: Use the first optimized weight matrix to obtain new fitted image features, and repeatedly execute the first optimization processing based on the new fitted image features until the difference between the obtained k-th optimized weight matrix and the (k−1)-th optimized weight matrix satisfies the first condition, and determine the k-th optimized weight matrix as the optimized first weight matrix, where k is a positive integer greater than 1.
- the difference between the first optimized weight matrix and the first weight matrix obtained by the first optimization processing may not satisfy the first condition, for example, the difference may be greater than the first threshold. In this case, the weight coefficients in the first optimized weight matrix can be used to obtain new fitted image features of each image feature, and the first error between the image features and the new fitted image features is then used to perform the first optimization processing again to obtain the second optimized weight matrix.
- if the difference between the second optimized weight matrix and the first optimized weight matrix satisfies the first condition, the second optimized weight matrix can be determined as the final optimization result, that is, the optimized first weight matrix; if the difference still does not satisfy the first condition, the above process is repeated until the difference between two successive optimized weight matrices satisfies the first condition, at which time the k-th optimized weight matrix may be determined as the optimized first weight matrix, where k is a positive integer greater than 1.
- the process of performing the first optimization process and obtaining the optimized first weight matrix according to the first error between the image feature and the fitted image feature can be completed.
- the expression of the iterative function of the first optimization processing may be: b^(t) = (X·W^(t−1)·X^T + λ·I)^(−1)·X·W^(t−1)·Y, where:
- t represents the number of iterations (that is, the number of times the first optimization processing has been performed);
- b^(t) represents the first optimized weight matrix obtained from the t-th first optimization processing;
- X represents the image feature matrix, and Y represents the observation matrix, which is the same as X;
- W^(t−1) represents the diagonal matrix of the second weight coefficients w_i obtained from the (t−1)-th iteration;
- I is the identity matrix, and λ represents the regularization parameter.
- it can be seen from the foregoing that the embodiment of the present disclosure performs optimization processing on the weight matrix by adjusting the second weight coefficients w_i each time the first optimization processing is performed.
- FIG. 5 shows a flowchart of step S232 in an image processing method according to an embodiment of the present disclosure.
- the using the first error between the image feature of each image and the fitted image feature to execute the first optimization process of the first weight matrix includes:
- the first error between each image feature and the corresponding fitted image feature can be determined.
- the determination of the first error can refer to the aforementioned expression (5).
- the second weight coefficient of the image feature can be determined according to the value of the first error, and the second weight coefficient is used to perform the first optimization process .
- in a possible implementation manner, the second weight coefficient of the corresponding image feature can be determined in the first manner, and the expression of the first manner can be: w_i = 1 if |e_i| < k, and w_i = k/|e_i| if |e_i| ≥ k;
- w_i is the second weight coefficient of the i-th image;
- e_i represents the first error between the i-th image feature and its corresponding fitted image feature;
- i is an integer between 1 and N, and N is the number of image features;
- k = 1.345σ, and σ is the standard deviation of the errors e_i.
- in other embodiments, the coefficient used to determine k may also take other values, such as 0.6, which is not a specific limitation of the embodiment of the present disclosure.
- in one example, the first error can be compared with the error threshold k. If the first error is less than k, the second weight coefficient of the corresponding image feature can be determined to be a first value, such as 1; if the first error is greater than or equal to k, the second weight coefficient of the image feature can be determined according to the first error, in which case the second weight coefficient is a second value, namely the ratio of k to the absolute value of the error.
- S2323 Perform the first optimization process of the first weight matrix based on the second weight coefficients of each image to obtain the first optimized weight matrix.
- if the difference between the first optimized weight matrix and the first weight matrix does not satisfy the first condition, after the weight coefficients in the first optimized weight matrix are used to obtain new fitted image features, the first errors between the image features and the new fitted image features are used to re-determine the second weight coefficient of each image feature, so that the above function iteration is performed according to the new second weight coefficients to obtain the second optimized weight matrix; and so on, the k-th optimized weight matrix corresponding to the k-th first optimization processing can be obtained.
- the iteration continues until the difference between the k-th optimized weight matrix obtained by the k-th first optimization processing and the (k−1)-th optimized weight matrix obtained by the (k−1)-th first optimization processing satisfies the first condition.
- the process of obtaining the weight coefficients of image features by feature fitting can be completed, and the weight coefficients obtained by this method have high accuracy and high robustness to abnormal values in the weight coefficients.
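- Putting the pieces above together, the sketch below implements the iterative reweighting as a runnable approximation. Because the equation images of the publication are not reproduced in this text, it uses the coordinate-wise minimizer implied by the per-image error e_i = Σ_j (x_ij − b_i·x_ij)^2: minimizing Σ_i w_i·e_i + λ·Σ_i b_i^2 gives b_i = w_i·s_i/(w_i·s_i + λ) with s_i = ||X_i||^2. This closed form, the λ value, and the stopping threshold are assumptions consistent with the stated error and weighting scheme, not the patent's exact matrix iteration:

```python
import numpy as np

def huber_weights(e: np.ndarray) -> np.ndarray:
    """Second weight coefficients: w_i = 1 if |e_i| < k, else k/|e_i|, k = 1.345*sigma."""
    sigma = e.std()
    k = 1.345 * sigma if sigma > 0 else 1.0
    w = np.ones_like(e)
    big = np.abs(e) >= k
    w[big] = k / np.abs(e[big])
    return w

def robust_fit_weights(X: np.ndarray, lam: float = 1e-3,
                       tol: float = 1e-2, max_iter: int = 50) -> np.ndarray:
    """Iteratively reweighted fit of the first weight matrix (a sketch).

    X: (N, D) image feature matrix, one feature per row.
    Stops when the change between successive weight matrices is below
    `tol`, playing the role of the first condition in the text.
    """
    s = np.sum(X ** 2, axis=1)            # s_i = ||X_i||^2 for each image feature
    b = s / (s + lam)                     # unweighted first weight matrix
    for _ in range(max_iter):
        e = (1.0 - b) ** 2 * s            # first errors e_i
        w = huber_weights(e)              # second weight coefficients
        b_new = w * s / (w * s + lam)     # reweighted fit
        if np.max(np.abs(b_new - b)) < tol:  # first condition satisfied
            break
        b = b_new
    return b_new
```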
- the embodiments of the present disclosure also provide a method for determining the weight coefficient of each image feature by means of median filtering. Compared with the feature fitting method, this method has a smaller computational cost.
- Fig. 6 shows a flowchart of step S20 in an image processing method according to an embodiment of the present disclosure, wherein the determining of the weight coefficient corresponding to each image feature according to the image feature of each image (step S20) may include:
- S201 Form an image feature matrix based on the image features of each image.
- the embodiment of the present disclosure can form an image feature matrix according to the image features of each image, and the image features of each image can be represented in the form of feature vectors.
- the dimensions of the image features of the images are the same, and they are all D.
- the image feature matrix X formed according to the image features of each image can be expressed as the aforementioned expression (2).
- based on this, an image feature matrix composed of the image features can be obtained. The elements of each row in the image feature matrix can be used as the image feature of one image, the rows corresponding to the image features of different images; alternatively, the elements of each column can be used as the image feature of one image. The embodiment of the present disclosure does not specifically limit the arrangement of the image feature matrix.
- median filtering processing may be performed on the obtained image feature matrix to obtain the median feature matrix corresponding to the image feature matrix.
- the element in the median feature matrix is the median of the image features corresponding to the corresponding elements in the image feature matrix.
- the embodiment of the present disclosure can determine the median value of each image feature in the image feature matrix for the element at the same location; and obtain the median feature matrix based on the median value of the element at each location.
- the image feature matrix of the embodiment of the present disclosure is represented by the aforementioned expression (2); correspondingly, the median value of the image features at each same position can be obtained.
- the "position” here refers to the position corresponding to the sequence number of each image feature.
- for example, the elements at the first position of each image feature are (x_11, x_21, ..., x_N1), and the elements at the j-th position are (x_1j, x_2j, ..., x_Nj); the elements at each same position can be determined in this way.
- median denotes the median function; that is, for the j-th position, the value located at the middle position of the sorted sequence [m_1j, m_2j, ..., m_Nj] of the elements at that position is obtained.
- if N is an odd number, the median obtained is the element value at the middle position ((N+1)/2); if N is an even number, the median obtained is the average of the two middle element values.
- the median feature matrix corresponding to each image feature in the image feature matrix can be obtained.
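- This step is a direct element-wise median. A short sketch with the row-wise feature matrix X of shape (N, D); note that np.median averages the two middle values when N is even, matching the description above:

```python
import numpy as np

def median_feature_matrix(X: np.ndarray) -> np.ndarray:
    """Median feature: for each position j, the median of (x_1j, ..., x_Nj)."""
    return np.median(X, axis=0)  # shape (D,)
```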
- the median value can be used to obtain the weight coefficient of the image feature.
- the second error between each image feature and the median feature matrix can be used, and the weight coefficient of each image feature can be determined according to the second error.
- Fig. 7 shows a flowchart of step S203 in an image processing method according to an embodiment of the present disclosure.
- the determining the weight coefficient corresponding to each image feature based on the median feature matrix includes:
- the sum of the absolute values of the differences between the image feature and the corresponding elements in the median feature matrix may be used as the second error between the image feature and the median feature matrix.
- the expression of the second error can be: e_h = Σ_{j=1..D} |x_hj − m_j|, where:
- e_h is the second error between the image feature X_h of the h-th image and the median feature matrix;
- M = [m_1, m_2, ..., m_D] represents the median feature matrix;
- X_h represents the image feature of the h-th image;
- h is an integer value between 1 and N.
- the second error between each image feature and the median feature matrix can be obtained, and then the weight coefficient can be determined by the second error.
- the second condition of the embodiment of the present disclosure may be that the second error is greater than the second threshold, and the second threshold may be a preset value, or it may be determined by the second error between each image feature and the median feature matrix.
- the expression of the second condition may be: e_h > K·MADN, where MADN = median([e_1, e_2, ... e_N])/0.675 (10);
- e_h is the second error between the image feature of the h-th image and the median feature matrix;
- h is an integer value from 1 to N, and N is the number of images;
- K is the judgment threshold, which can be a preset value, for example 0.8, but this is not a limitation of the embodiment of the present disclosure; median represents the median filter function.
- that is, the second threshold in the embodiment of the present disclosure may be the product of the judgment threshold K and the ratio of the median of the second errors corresponding to the image features to 0.675, and the judgment threshold may be a positive number less than 1.
- in response to the second error between the image feature and the median feature matrix satisfying the second condition, for example, the second error being greater than the second threshold, it indicates that the image feature may be abnormal, and the first weight is determined as the weight coefficient of the image feature.
- the first weight in the embodiment of the present disclosure may be a preset weight coefficient, for example, 0; in other embodiments, the first weight may also be set to another value, so as to reduce the influence of possibly abnormal image features on the fusion feature.
- S2034 Use the second method to determine the weight coefficient of the image feature.
- in response to the second error between the image feature and the median feature matrix not satisfying the second condition, for example, the second error being less than or equal to the second threshold, it indicates that the image feature is relatively accurate, and the weight coefficient of the image feature is determined based on the second error in the second manner.
- the expression of the second manner may be:
- where b_h is the weight coefficient of the h-th image determined in the second manner, e_h is the second error between the image feature of the h-th image and the median feature matrix, h is an integer from 1 to N, and N represents the number of images.
- the weight coefficient b_h of the image feature can thus be obtained in the second manner described above.
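- the exact expression of the second manner is not reproduced in this extract; purely as a placeholder consistent with the idea that smaller second errors should yield larger weights, the sketch below maps each remaining feature to a normalized inverse-error weight. The inverse-error form is an assumption for illustration, not the disclosed formula:

```python
import numpy as np

def second_manner_weights(second_errors: np.ndarray, abnormal: np.ndarray):
    """Weight coefficients b_h for the features that passed the gating.

    Assumption: b_h decreases as the second error e_h grows, here via
    normalized inverse errors; abnormal features keep weight 0.
    Assumes at least one feature passed the gate.
    """
    eps = 1e-12                                   # guard against e_h == 0
    inv = np.where(abnormal, 0.0, 1.0 / (second_errors + eps))
    return inv / inv.sum()                        # weights sum to 1
```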
- the weight coefficients of each image feature can be obtained by median filtering.
- using median filtering to determine the weight coefficients can further reduce computational overhead and effectively reduce the complexity of calculation and processing.
- the accuracy of the obtained fusion features can also be improved.
- feature fusion processing can then be performed; for example, the fusion feature can be obtained as the sum of the products of each image feature and its corresponding weight coefficient.
- the embodiment of the present disclosure may also use the fusion feature to perform a recognition operation of the target object in the image.
- the fusion feature can be compared with the image features of each object stored in a database; if there is a first image whose similarity is greater than a similarity threshold, the target object can be determined as the object corresponding to the first image, thereby completing operations such as identity recognition and target recognition.
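- as a sketch of fusion followed by recognition, assuming cosine similarity as the comparison measure and a hypothetical gallery mapping object identifiers to reference features (neither is fixed by the disclosure):

```python
import numpy as np

def fuse_and_identify(features: np.ndarray, weights: np.ndarray,
                      gallery: dict, threshold: float = 0.8):
    """Fuse weighted image features and match against stored objects."""
    # Fusion feature: sum of the products of each image feature and
    # its corresponding weight coefficient.
    fused = (weights[:, None] * features).sum(axis=0)
    best_id, best_sim = None, -1.0
    for obj_id, ref in gallery.items():
        sim = (ref @ fused) / (np.linalg.norm(ref) * np.linalg.norm(fused))
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    # Identity is assigned only if the best similarity clears the threshold.
    return (best_id, best_sim) if best_sim > threshold else (None, best_sim)
```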
- other types of object recognition operations may also be performed, which is not specifically limited in the present disclosure.
- the embodiment of the present disclosure may first obtain different face images of the object A; for example, N face images may be obtained, where N is an integer greater than 1.
- the weight coefficient corresponding to each facial feature can be determined.
- the weight coefficient may be obtained by means of feature fitting, or the weight coefficient may be obtained by means of median filtering, which may be specifically determined according to the received selection information.
- the sum of the product between the weight coefficient and the image feature can be used to obtain the fusion feature.
- the fusion feature can be further used to perform operations such as target detection and target recognition.
- the embodiments of the present disclosure can fuse different features of the same object.
- the weight coefficient corresponding to each image feature can be determined according to the image features of different images of the same object, and feature fusion can be performed on the image features by means of the weight coefficients; this can improve the accuracy of feature fusion.
- the writing order of the steps does not imply a strict execution order, nor does it constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
- the present disclosure also provides image processing apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any image processing method provided in the present disclosure.
- Fig. 8 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
- the image processing apparatus of an embodiment of the present disclosure may include:
- the obtaining module 10 is configured to obtain image features of multiple images of the same object respectively;
- the determining module 20 is configured to determine the weight coefficient corresponding to each image feature one-to-one according to the image feature of each image;
- the fusion module 30 is configured to perform feature fusion processing on the image features of the multiple images based on the weight coefficients of each of the image features to obtain the fusion features of the multiple images.
- the determining module 20 includes:
- the first establishing unit is configured to form an image feature matrix based on the image features of each image
- a fitting unit configured to perform feature fitting processing on the image feature matrix to obtain a first weight matrix
- the first determining unit is configured to determine the weight coefficient corresponding to each image feature based on the first weight matrix.
- the fitting unit is further configured to perform feature fitting processing on the image feature matrix by using a regularized linear least squares estimation algorithm, and to obtain the first weight matrix when the preset objective function reaches its minimum value.
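- the preset objective function is not reproduced in this extract; in the generic sense, a regularized linear least squares estimate minimizes ||Aw − y||² + λ||w||² and has the closed form w = (AᵀA + λI)⁻¹Aᵀy. The sketch below shows only that estimator; how A and y are assembled from the image feature matrix, and the value of λ, are assumptions:

```python
import numpy as np

def regularized_least_squares(A: np.ndarray, y: np.ndarray, lam: float = 1e-3):
    """Closed-form Tikhonov-regularized least squares estimate.

    Solves min_w ||A @ w - y||^2 + lam * ||w||^2 via the normal
    equations: w = (A^T A + lam * I)^{-1} A^T y.
    """
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)
```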
- the determining module 20 further includes an optimization unit configured to perform a first optimization process on the first weight matrix
- the first determining unit is further configured to determine each first weight coefficient included in the first weight matrix as the weight coefficient corresponding to each image feature; or to determine each first weight coefficient included in the optimized first weight matrix as the weight coefficient corresponding to each image feature.
- the optimization unit is further configured to: determine the fitted image feature of each image based on the first weight coefficient of each image feature included in the first weight matrix, where the fitted image feature is the product of the image feature and the corresponding first weight coefficient; perform the first optimization process on the first weight matrix by using the first error between the image feature of each image and the fitted image feature, to obtain a first optimized weight matrix; in response to the difference between the first weight matrix and the first optimized weight matrix satisfying the first condition, determine the first optimized weight matrix as the optimized first weight matrix; and in response to the difference between the first weight matrix and the first optimized weight matrix not satisfying the first condition, obtain a new fitted image feature by using the first optimized weight matrix, and repeatedly execute the first optimization process based on the new fitted image feature until the difference between the obtained k-th optimized weight matrix and the (k-1)-th optimized weight matrix satisfies the first condition, in which case the k-th optimized weight matrix is determined as the optimized first weight matrix, where k is a positive integer greater than 1.
- the optimization unit is further configured to: obtain the first error between the image feature and the fitted image feature according to the sum of the squares of the differences between each image feature and the corresponding elements in the fitted image feature; obtain the second weight coefficient of each image feature based on each of the first errors; and perform the first optimization process on the first weight matrix based on the second weight coefficient of each image, to obtain the first optimized weight matrix corresponding to the first weight matrix.
- the optimization unit is further configured to obtain a second weight coefficient of each image feature based on each of the first errors in a first manner, wherein the expression of the first manner is:
- where w_i is the second weight coefficient of the i-th image, e_i represents the first error between the i-th image feature and its corresponding fitted image feature, i is an integer between 1 and N, N is the number of images, k = 1.345·σ, and σ is the standard deviation of the errors e_i.
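- the expression of the first manner is likewise not reproduced here; the constant k = 1.345·σ is, however, the standard tuning constant of the Huber weight function used in robust regression, so a Huber-style sketch is shown below as a plausible (but assumed) reading:

```python
import numpy as np

def first_manner_weights(first_errors: np.ndarray):
    """Second weight coefficients w_i from the first errors e_i,
    using the Huber weight: w_i = 1 if |e_i| <= k, else k / |e_i|,
    with k = 1.345 * sigma and sigma the standard deviation of e_i."""
    e = np.asarray(first_errors, dtype=float)
    k = 1.345 * e.std()                           # k = 1.345 * sigma
    abs_e = np.maximum(np.abs(e), 1e-12)          # avoid division by zero
    return np.where(abs_e <= k, 1.0, k / abs_e)
```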
- the determining module 20 further includes:
- the second establishing unit is configured to form an image feature matrix based on the image features of each image
- a filtering unit configured to perform median filtering processing on the image feature matrix to obtain a median feature matrix
- the second determining unit is configured to determine the weight coefficient corresponding to each image feature based on the median feature matrix.
- the filtering unit is further configured to determine the median value of the elements of the image features at the same position in the image feature matrix, and to obtain the median feature matrix based on the median value of the elements at each position.
- the second determining unit is further configured to: obtain the second error between each image feature and the median feature matrix; in response to the second error between an image feature and the median feature matrix satisfying the second condition, configure the weight coefficient of the image feature as the first weight; and in response to the second error between an image feature and the median feature matrix not satisfying the second condition, determine the weight coefficient of the image feature in the second manner.
- the expression of the second manner is:
- where b_h is the weight coefficient of the h-th image determined in the second manner, e_h is the second error between the image feature of the h-th image and the median feature matrix, h is an integer from 1 to N, and N represents the number of images.
- the second condition is: e_h > K·MADN; MADN = median([e_1, e_2, …, e_N])/0.675; where e_h is the second error between the image feature of the h-th image and the median feature matrix, h is an integer from 1 to N, N represents the number of images, K is the judgment threshold, and median represents the median filter function.
- the fusion module 30 is further configured to obtain the fusion feature by using the sum value of the product of each image feature and the corresponding weight coefficient.
- the device further includes a recognition module configured to perform the recognition operation of the same object by using the fusion feature.
- the device further includes a mode determination module configured to obtain selection information for the acquisition mode of the weight coefficient and determine the acquisition mode of the weight coefficient based on the selection information, where the acquisition modes of the weight coefficient include obtaining the weight coefficient by means of feature fitting and obtaining the weight coefficient by means of median filtering.
- the determining module 20 is further configured to execute the determination of the weight coefficient corresponding to each image feature according to the image feature of each image based on the determined acquisition mode of the weight coefficient.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for brevity, details are not repeated here, and reference may be made to the description of the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- FIG. 9 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, And the communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD, Liquid Crystal Display) and a touch panel (TP, Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC, Microphone), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example, the display and the keypad of the electronic device 800; the sensor component 814 can also detect a position change of the electronic device 800 or a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a Complementary Metal-Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA, Infrared Data Association) technology, ultra-wideband (UWB, Ultra Wide Band) technology, Bluetooth (BT, BlueTooth) technology and other technologies.
- the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers (MCU), microprocessors, or other electronic components, to perform the above method.
- a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- FIG. 10 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
- the embodiment of the present disclosure also provides a non-volatile computer-readable storage medium, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
- the embodiments of the present disclosure may be systems, methods and/or computer program products.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the embodiments of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
- the computer-readable storage medium used here should not be interpreted as transient signals themselves, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, status setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the status information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- these computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, thereby producing a machine, such that when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks of the flowchart and/or block diagram is produced. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
- each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for implementing the specified logical function. The functions marked in the blocks may also occur in an order different from the order marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Claims (32)
- An image processing method, comprising: separately acquiring image features of multiple images of the same object; determining, according to the image feature of each image, a weight coefficient in one-to-one correspondence with each of the image features; and performing, based on the weight coefficient of each of the image features, feature fusion processing on the image features of the multiple images to obtain a fusion feature of the multiple images.
- The method according to claim 1, wherein the determining, according to the image feature of each image, a weight coefficient in one-to-one correspondence with each of the image features comprises: forming an image feature matrix based on the image features of the images; performing feature fitting processing on the image feature matrix to obtain a first weight matrix; and determining, based on the first weight matrix, the weight coefficient corresponding to each image feature.
- The method according to claim 2, wherein the performing feature fitting processing on the image feature matrix to obtain a first weight matrix comprises: performing feature fitting processing on the image feature matrix by using a regularized linear least squares estimation algorithm, and obtaining the first weight matrix when a preset objective function reaches its minimum value.
- The method according to claim 2 or 3, wherein the determining, based on the first weight matrix, the weight coefficient corresponding to each image feature comprises: determining each first weight coefficient included in the first weight matrix as the weight coefficient corresponding to each image feature; or performing a first optimization process on the first weight matrix, and determining each first weight coefficient included in the optimized first weight matrix as the weight coefficient corresponding to each image feature.
- The method according to claim 4, wherein the performing a first optimization process on the first weight matrix comprises: determining a fitted image feature of each image based on the first weight coefficient of each image feature included in the first weight matrix, where the fitted image feature is the product of the image feature and the corresponding first weight coefficient; performing the first optimization process on the first weight matrix by using a first error between the image feature of each image and the fitted image feature, to obtain a first optimized weight matrix; in response to the difference between the first weight matrix and the first optimized weight matrix satisfying a first condition, determining the first optimized weight matrix as the optimized first weight matrix; and in response to the difference between the first weight matrix and the first optimized weight matrix not satisfying the first condition, obtaining a new fitted image feature by using the first optimized weight matrix, and repeatedly executing the first optimization process based on the new fitted image feature until the difference between the obtained k-th optimized weight matrix and the (k-1)-th optimized weight matrix satisfies the first condition, and determining the k-th optimized weight matrix as the optimized first weight matrix, where k is a positive integer greater than 1.
- The method according to claim 5, wherein the performing the first optimization process on the first weight matrix by using the first error between the image feature of each image and the fitted image feature comprises: obtaining the first error between the image feature and the fitted image feature according to the sum of the squares of the differences between each image feature and the corresponding elements in the fitted image feature; obtaining a second weight coefficient of each image feature based on each of the first errors; and performing the first optimization process on the first weight matrix based on the second weight coefficient of each image, to obtain the first optimized weight matrix corresponding to the first weight matrix.
- The method according to any one of claims 1-7, wherein the determining, according to the image feature of each image, a weight coefficient in one-to-one correspondence with each of the image features further comprises: forming an image feature matrix based on the image features of the images; performing median filtering processing on the image feature matrix to obtain a median feature matrix; and determining, based on the median feature matrix, the weight coefficient corresponding to each image feature.
- The method according to claim 8, wherein the performing median filtering processing on the image feature matrix to obtain a median feature matrix comprises: determining the median value of the elements of the image features at the same position in the image feature matrix; and obtaining the median feature matrix based on the median value of the elements at each position.
- The method according to claim 8 or 9, wherein the determining, based on the median feature matrix, the weight coefficient corresponding to each image feature comprises: obtaining a second error between each image feature and the median feature matrix; in response to the second error between an image feature and the median feature matrix satisfying a second condition, configuring the weight coefficient of the image feature as a first weight; and in response to the second error between an image feature and the median feature matrix not satisfying the second condition, determining the weight coefficient of the image feature in a second manner.
- The method according to claim 10 or 11, wherein the second condition is: e_h > K·MADN; MADN = median([e_1, e_2, …, e_N])/0.675; where e_h is the second error between the image feature of the h-th image and the median feature matrix, h is an integer from 1 to N, N represents the number of images, K is the judgment threshold, and median represents the median filter function.
- The method according to any one of claims 1-12, wherein the performing, based on the weight coefficient of each of the image features, feature fusion processing on the image features of the multiple images to obtain a fusion feature of the multiple images comprises: obtaining the fusion feature by using the sum of the products of each image feature and the corresponding weight coefficient.
- The method according to any one of claims 1-13, wherein the method further comprises: performing a recognition operation of the same object by using the fusion feature.
- The method according to any one of claims 1-14, wherein before the determining, according to the image feature of each image, the weight coefficient corresponding to each of the image features, the method further comprises: obtaining selection information for the acquisition mode of the weight coefficient; determining the acquisition mode of the weight coefficient based on the selection information; and executing, based on the determined acquisition mode of the weight coefficient, the determining, according to the image feature of each image, of the weight coefficient corresponding to each of the image features; where the acquisition modes of the weight coefficient include obtaining the weight coefficient by means of feature fitting and obtaining the weight coefficient by means of median filtering.
- An image processing apparatus, comprising: an acquisition module configured to separately acquire image features of multiple images of the same object; a determining module configured to determine, according to the image feature of each image, a weight coefficient in one-to-one correspondence with each of the image features; and a fusion module configured to perform, based on the weight coefficient of each of the image features, feature fusion processing on the image features of the multiple images to obtain a fusion feature of the multiple images.
- The apparatus according to claim 16, wherein the determining module comprises: a first establishing unit configured to form an image feature matrix based on the image features of the images; a fitting unit configured to perform feature fitting processing on the image feature matrix to obtain a first weight matrix; and a first determining unit configured to determine, based on the first weight matrix, the weight coefficient corresponding to each image feature.
- The apparatus according to claim 17, wherein the fitting unit is further configured to perform feature fitting processing on the image feature matrix by using a regularized linear least squares estimation algorithm, and obtain the first weight matrix when a preset objective function reaches its minimum value.
- The apparatus according to claim 17 or 18, wherein the determining module further comprises an optimization unit configured to perform a first optimization process on the first weight matrix; the first determining unit is further configured to determine each first weight coefficient included in the first weight matrix as the weight coefficient corresponding to each image feature, or determine each first weight coefficient included in the optimized first weight matrix as the weight coefficient corresponding to each image feature.
- The apparatus according to claim 19, wherein the optimization unit is further configured to: determine a fitted image feature of each image based on the first weight coefficient of each image feature included in the first weight matrix; perform the first optimization process on the first weight matrix by using a first error between the image feature of each image and the fitted image feature, to obtain a first optimized weight matrix; in response to the difference between the first weight matrix and the first optimized weight matrix satisfying a first condition, determine the first optimized weight matrix as the optimized first weight matrix; and in response to the difference between the first weight matrix and the first optimized weight matrix not satisfying the first condition, obtain a new fitted image feature by using the first optimized weight matrix, and repeatedly execute the first optimization process based on the new fitted image feature until the difference between the obtained k-th optimized weight matrix and the (k-1)-th optimized weight matrix satisfies the first condition, and determine the k-th optimized weight matrix as the optimized first weight matrix, where k is a positive integer greater than 1; where the fitted image feature is the product of the image feature and the corresponding first weight coefficient.
- The apparatus according to claim 20, wherein the optimization unit is further configured to obtain the first error between the image feature and the fitted image feature according to the sum of the squares of the differences between each image feature and the corresponding elements in the fitted image feature; obtain a second weight coefficient of each image feature based on each of the first errors; and perform the first optimization process on the first weight matrix based on the second weight coefficient of each image, to obtain the first optimized weight matrix corresponding to the first weight matrix.
- The apparatus according to any one of claims 16-22, wherein the determining module further comprises: a second establishing unit configured to form an image feature matrix based on the image features of the images; a filtering unit configured to perform median filtering processing on the image feature matrix to obtain a median feature matrix; and a second determining unit configured to determine, based on the median feature matrix, the weight coefficient corresponding to each image feature.
- The apparatus according to claim 23, wherein the filtering unit is further configured to determine the median value of the elements of the image features at the same position in the image feature matrix, and obtain the median feature matrix based on the median value of the elements at each position.
- The apparatus according to claim 23 or 24, wherein the second determining unit is further configured to obtain a second error between each image feature and the median feature matrix; in response to the second error between an image feature and the median feature matrix satisfying a second condition, configure the weight coefficient of the image feature as a first weight; and in response to the second error between an image feature and the median feature matrix not satisfying the second condition, determine the weight coefficient of the image feature in a second manner.
- The apparatus according to claim 25 or 26, wherein the second condition is: e_h > K·MADN; MADN = median([e_1, e_2, …, e_N])/0.675; where e_h is the second error between the image feature of the h-th image and the median feature matrix, h is an integer from 1 to N, N represents the number of images, K is the judgment threshold, and median represents the median filter function.
- The apparatus according to any one of claims 16-27, wherein the fusion module is further configured to obtain the fusion feature by using the sum of the products of each image feature and the corresponding weight coefficient.
- The apparatus according to any one of claims 16-28, wherein the apparatus further comprises a recognition module configured to perform a recognition operation of the same object by using the fusion feature.
- The apparatus according to any one of claims 16-29, wherein the apparatus further comprises a mode determination module configured to obtain selection information for the acquisition mode of the weight coefficient and determine the acquisition mode of the weight coefficient based on the selection information, where the acquisition modes of the weight coefficient include obtaining the weight coefficient by means of feature fitting and obtaining the weight coefficient by means of median filtering; the determining module is further configured to execute, based on the determined acquisition mode of the weight coefficient, the determining, according to the image feature of each image, of the weight coefficient corresponding to each of the image features.
- An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the method according to any one of claims 1 to 15.
- A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 15.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020573111A JP7098763B2 (ja) | 2019-03-25 | 2019-10-30 | 画像処理方法及び装置、電子機器、並びに記憶媒体 |
KR1020217002882A KR102389766B1 (ko) | 2019-03-25 | 2019-10-30 | 이미지 처리 방법 및 장치, 전자 기기 및 저장 매체 |
SG11202108147UA SG11202108147UA (en) | 2019-03-25 | 2019-10-30 | Image processing method and apparatus, electronic device, and storage medium |
US17/378,931 US20210342632A1 (en) | 2019-03-25 | 2021-07-19 | Image processing method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910228716.XA CN109977860B (zh) | 2019-03-25 | 2019-03-25 | 图像处理方法及装置、电子设备和存储介质 |
CN201910228716.X | 2019-03-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/378,931 Continuation US20210342632A1 (en) | 2019-03-25 | 2021-07-19 | Image processing method and apparatus, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020192113A1 true WO2020192113A1 (zh) | 2020-10-01 |
Family
ID=67080466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/114465 WO2020192113A1 (zh) | 2019-03-25 | 2019-10-30 | 图像处理方法及装置、电子设备和存储介质 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20210342632A1 (zh) |
JP (1) | JP7098763B2 (zh) |
KR (1) | KR102389766B1 (zh) |
CN (3) | CN113486830A (zh) |
SG (1) | SG11202108147UA (zh) |
TW (1) | TWI778313B (zh) |
WO (1) | WO2020192113A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112289306A (zh) * | 2020-11-18 | 2021-01-29 | 上海依图网络科技有限公司 | 一种基于人体特征的未成年人识别的方法及装置 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113486830A (zh) * | 2019-03-25 | 2021-10-08 | 上海商汤智能科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
TWI796072B (zh) * | 2021-12-30 | 2023-03-11 | 關貿網路股份有限公司 | 身分辨識系統、方法及其電腦可讀媒體 |
CN114627431B (zh) * | 2022-02-22 | 2023-07-21 | 安徽新识智能科技有限公司 | 一种基于物联网的环境智能监控方法及系统 |
CN115334239B (zh) * | 2022-08-10 | 2023-12-15 | 青岛海信移动通信技术有限公司 | 前后摄像头拍照融合的方法、终端设备和存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108259774A (zh) * | 2018-01-31 | 2018-07-06 | 珠海市杰理科技股份有限公司 | 图像合成方法、系统和设备 |
US20180204094A1 (en) * | 2015-11-26 | 2018-07-19 | Tencent Technology (Shenzhen) Company Limited | Image recognition method and apparatus |
CN108460365A (zh) * | 2018-03-27 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | 身份认证方法和装置 |
CN109472211A (zh) * | 2018-10-16 | 2019-03-15 | 深圳爱莫科技有限公司 | 人脸识别方法及装置 |
CN109977860A (zh) * | 2019-03-25 | 2019-07-05 | 上海商汤智能科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100745981B1 (ko) * | 2006-01-13 | 2007-08-06 | 삼성전자주식회사 | 보상적 특징에 기반한 확장형 얼굴 인식 방법 및 장치 |
KR20120066462A (ko) * | 2010-12-14 | 2012-06-22 | 한국전자통신연구원 | 얼굴 인식 방법 및 시스템, 얼굴 인식을 위한 학습용 특징 벡터 추출 장치 및 테스트용 특징 벡터 추출 장치 |
CN108022252A (zh) * | 2012-01-19 | 2018-05-11 | 索尼公司 | 图像处理设备和方法 |
US9053558B2 (en) * | 2013-07-26 | 2015-06-09 | Rui Shen | Method and system for fusing multiple images |
CN103679144B (zh) * | 2013-12-05 | 2017-01-11 | 东南大学 | 一种基于计算机视觉的复杂环境下果蔬识别方法 |
JP6679373B2 (ja) | 2016-03-28 | 2020-04-15 | パナソニックi−PROセンシングソリューションズ株式会社 | 顔検出装置、顔検出方法及び顔認識システム |
TWI731920B (zh) * | 2017-01-19 | 2021-07-01 | 香港商斑馬智行網絡(香港)有限公司 | 圖像特徵提取方法、裝置、終端設備及系統 |
JP2018120527A (ja) | 2017-01-27 | 2018-08-02 | 株式会社リコー | 画像処理装置、画像処理方法及び画像処理システム |
CN107085833B (zh) * | 2017-04-13 | 2019-07-16 | 长安大学 | 基于梯度倒数自适应开关均中值融合的遥感图像滤波方法 |
CN108304789A (zh) * | 2017-12-12 | 2018-07-20 | 北京深醒科技有限公司 | 脸部识别方法及装置 |
CN108052942B (zh) * | 2017-12-28 | 2021-07-06 | 南京理工大学 | 一种飞机飞行姿态的视觉图像识别方法 |
CN108563767B (zh) * | 2018-04-19 | 2020-11-27 | 深圳市商汤科技有限公司 | 图像检索方法及装置 |
CN109165551B (zh) * | 2018-07-13 | 2021-08-31 | 广东工业大学 | 一种自适应加权融合显著性结构张量和lbp特征的表情识别方法 |
CN109389587B (zh) * | 2018-09-26 | 2021-07-16 | 上海联影智能医疗科技有限公司 | 一种医学图像分析系统、装置及存储介质 |
CN109492560A (zh) * | 2018-10-26 | 2019-03-19 | 深圳力维智联技术有限公司 | 基于时间尺度的人脸图像特征融合方法、装置和存储介质 |
- 2019-03-25: CN 202110796805.1A patent/CN113486830A/zh active Pending
- 2019-03-25: CN 202110796809.XA patent/CN113537048A/zh active Pending
- 2019-03-25: CN 201910228716.XA patent/CN109977860B/zh active Active
- 2019-10-30: WO PCT/CN2019/114465 patent/WO2020192113A1/zh active Application Filing
- 2019-10-30: KR KR1020217002882A patent/KR102389766B1/ko active IP Right Grant
- 2019-10-30: JP JP2020573111A patent/JP7098763B2/ja active Active
- 2019-10-30: SG SG11202108147UA patent/SG11202108147UA/en unknown
- 2019-12-19: TW TW108146760A patent/TWI778313B/zh active
- 2021-07-19: US US17/378,931 patent/US20210342632A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180204094A1 (en) * | 2015-11-26 | 2018-07-19 | Tencent Technology (Shenzhen) Company Limited | Image recognition method and apparatus |
CN108259774A (zh) * | 2018-01-31 | 2018-07-06 | 珠海市杰理科技股份有限公司 | 图像合成方法、系统和设备 |
CN108460365A (zh) * | 2018-03-27 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | 身份认证方法和装置 |
CN109472211A (zh) * | 2018-10-16 | 2019-03-15 | 深圳爱莫科技有限公司 | 人脸识别方法及装置 |
CN109977860A (zh) * | 2019-03-25 | 2019-07-05 | 上海商汤智能科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112289306A (zh) * | 2020-11-18 | 2021-01-29 | 上海依图网络科技有限公司 | 一种基于人体特征的未成年人识别的方法及装置 |
CN112289306B (zh) * | 2020-11-18 | 2024-03-26 | 上海依图网络科技有限公司 | 一种基于人体特征的未成年人识别的方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
TWI778313B (zh) | 2022-09-21 |
KR102389766B1 (ko) | 2022-04-22 |
JP2021530047A (ja) | 2021-11-04 |
CN109977860A (zh) | 2019-07-05 |
CN113537048A (zh) | 2021-10-22 |
JP7098763B2 (ja) | 2022-07-11 |
CN109977860B (zh) | 2021-07-16 |
TW202036476A (zh) | 2020-10-01 |
US20210342632A1 (en) | 2021-11-04 |
KR20210024631A (ko) | 2021-03-05 |
SG11202108147UA (en) | 2021-08-30 |
CN113486830A (zh) | 2021-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021196401A1 (zh) | 图像重建方法及装置、电子设备和存储介质 | |
WO2021056808A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2021164469A1 (zh) | 目标对象的检测方法、装置、设备和存储介质 | |
CN107491541B (zh) | 文本分类方法及装置 | |
WO2020192113A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2021008023A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN110390394B (zh) | 批归一化数据的处理方法及装置、电子设备和存储介质 | |
WO2021031609A1 (zh) | 活体检测方法及装置、电子设备和存储介质 | |
TWI782480B (zh) | 圖像處理方法及電子設備和電腦可讀儲存介質 | |
WO2021012564A1 (zh) | 视频处理方法及装置、电子设备和存储介质 | |
WO2021035812A1 (zh) | 一种图像处理方法及装置、电子设备和存储介质 | |
CN110909815B (zh) | 神经网络训练、图像处理方法、装置及电子设备 | |
CN109214428B (zh) | 图像分割方法、装置、计算机设备及计算机存储介质 | |
WO2021036382A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN110532956B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN109635920B (zh) | 神经网络优化方法及装置、电子设备和存储介质 | |
CN110659690B (zh) | 神经网络的构建方法及装置、电子设备和存储介质 | |
TWI735112B (zh) | 圖像生成方法、電子設備和儲存介質 | |
WO2020147414A1 (zh) | 网络优化方法及装置、图像处理方法及装置、存储介质 | |
TWI719777B (zh) | 圖像重建方法、圖像重建裝置、電子設備和電腦可讀儲存媒體 | |
CN111259967A (zh) | 图像分类及神经网络训练方法、装置、设备及存储介质 | |
TW202141352A (zh) | 字元識別方法及電子設備和電腦可讀儲存介質 | |
CN111242303A (zh) | 网络训练方法及装置、图像处理方法及装置 | |
CN111582383A (zh) | 属性识别方法及装置、电子设备和存储介质 | |
CN111062407B (zh) | 图像处理方法及装置、电子设备和存储介质 |
Legal Events
- 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19920730; Country of ref document: EP; Kind code of ref document: A1
- ENP | Entry into the national phase | Ref document number: 2020573111; Country of ref document: JP; Kind code of ref document: A
- ENP | Entry into the national phase | Ref document number: 20217002882; Country of ref document: KR; Kind code of ref document: A
- NENP | Non-entry into the national phase | Ref country code: DE
- 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.02.2022)
- 122 | Ep: pct application non-entry in european phase | Ref document number: 19920730; Country of ref document: EP; Kind code of ref document: A1