CN110633740A - Image semantic matching method, terminal and computer-readable storage medium - Google Patents


Info

Publication number
CN110633740A
CN110633740A (application CN201910824888.3A)
Authority
CN
China
Prior art keywords: feature, image, label, labels, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910824888.3A
Other languages
Chinese (zh)
Other versions
CN110633740B (en)
Inventor
王健宗
彭俊清
瞿晓阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910824888.3A priority Critical patent/CN110633740B/en
Publication of CN110633740A publication Critical patent/CN110633740A/en
Priority to PCT/CN2020/112352 priority patent/WO2021043092A1/en
Application granted granted Critical
Publication of CN110633740B publication Critical patent/CN110633740B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image semantic matching method, a terminal, and a computer-readable storage medium. A first image and a second image are analyzed to obtain a first feature label set and a second feature label set, respectively. Each feature label in the first feature label set is matched against each feature label in the second feature label set to determine the feature labels shared by the two sets, and the semantic matching relationship between the first image and the second image is determined by analyzing those shared feature labels. Because the two feature label sets are obtained from the two images independently, the method can determine whether a semantic relationship exists even between an unpaired first image and second image.

Description

Image semantic matching method, terminal and computer-readable storage medium
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an image semantic matching method, a terminal, and a computer-readable storage medium.
Background
Semantic matching establishes semantic correspondences between different object instances or scenes. Most research on semantic matching focuses on paired images that already share a certain relationship; for unrelated, unpaired images, no existing method achieves semantic matching.
Disclosure of Invention
The invention provides an image semantic matching method, a terminal and a computer readable storage medium.
The invention provides an image semantic matching method, which comprises the following steps:
analyzing a first image and a second image to obtain a first feature label set and a second feature label set, respectively, wherein the first feature label set comprises the feature labels of each subject in the first image, and the second feature label set comprises the feature labels of each subject in the second image;
matching each feature label in the first feature label set with each feature label in the second feature label set, and determining feature labels shared by the first feature label set and the second feature label set;
determining a semantic matching relationship of the first image and the second image based on feature labels common to the first feature label set and the second feature label set.
Optionally, the feature labels include position labels, pixel labels, and area labels of each subject in the image;
the position label is the position of the subject in the image;
the pixel label comprises average brightness, gray scale, hue, tone, or color temperature information of the subject;
the area label comprises the area of the subject, or the ratio of the subject's area to the total area of the image.
Optionally, analyzing the first image and the second image to obtain a first feature tag set and a second feature tag set, respectively, includes:
determining a subject in the first image and the second image;
determining feature labels of the subjects in the first image and the second image based on the subjects in the first image and the second image;
and determining the feature labels of all subjects in the first image as the feature label set of the first image, and determining the feature labels of all subjects in the second image as the feature label set of the second image.
Optionally, matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining a feature tag common to the first feature tag set and the second feature tag set, includes:
matching each subject in the first image with each subject in the second image, and determining a subject shared by the first image and the second image;
and comparing the feature labels of the subjects common to the first image and the second image to obtain the feature labels of the common subjects, which serve as the feature labels common to the first feature label set and the second feature label set.
Optionally, matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining a feature tag common to the first feature tag set and the second feature tag set, includes:
classifying each feature label in the first feature label set and each feature label in the second feature label set;
and respectively comparing the feature labels of the same category in the first feature label set and the second feature label set, and determining the feature labels shared by the first feature label set and the second feature label set.
Optionally, determining a semantic matching relationship between the first image and the second image based on feature labels common to the first feature label set and the second feature label set includes:
and when the number of common feature labels is greater than or equal to a preset number of labels, judging that the first image and the second image have a semantic relationship.
Optionally, determining a semantic matching relationship between the first image and the second image based on feature labels common to the first feature label set and the second feature label set includes:
determining important feature labels in a first feature label set of a first image;
and when the feature labels shared by the first feature label set and the second feature label set include the important feature labels, judging that the first image and the second image have a semantic relationship.
Optionally, determining the important feature label in the first feature label set of the first image includes:
determining each feature label of the subject occupying the largest area of the first image as an important feature label;
or determining each feature label of the brightest subject in the first image as an important feature label;
or determining each feature label of the subjects of the first image located in the fifth (center) cell of the nine-square grid as important feature labels.
Furthermore, the invention also provides a device, which comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is used to execute one or more programs stored in the memory to implement the steps of the image semantic matching method as above.
Further, the present invention also provides a computer readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the image semantic matching method as above.
The invention provides an image semantic matching method, a terminal, and a computer-readable storage medium. By analyzing an unpaired and unrelated first image and second image, a first feature label set and a second feature label set are obtained respectively, and whether the first image and the second image have a semantic relationship can be determined based on the feature labels shared by the two sets.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a basic flowchart of an image semantic matching method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of an image semantic matching method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for implementing the image semantic matching method according to an embodiment of the present invention.
Detailed Description
To make the objects, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that those skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
Fig. 1 is a basic flowchart of an image semantic matching method provided in this embodiment, and the method includes:
s101, analyzing the first image and the second image to obtain a first characteristic label set and a second characteristic label set respectively.
In step S101, the first feature label set includes the feature labels of each subject in the first image, and the second feature label set includes the feature labels of each subject in the second image.
A subject in an image is what stands out against the background, and an image may contain multiple subjects. For example, in a picture of a dog running on grass, the subject is the running dog and the background is the grass.
The feature labels may include one or more of a position label, a pixel label, and an area label for each subject in the image. When all three are present: the position label is the position of the subject in the image; the pixel label comprises the average brightness, gray scale, hue, tone, or color temperature information of the subject; and the area label comprises the area of the subject, or the ratio of the subject's area to the total area of the image.
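As a concrete illustration of these label types, the per-subject labels could be represented as a small data structure. This is a minimal sketch under assumed field names and value conventions; the embodiment does not prescribe any particular representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeatureLabels:
    """One subject's labels; field names are illustrative assumptions."""
    position: tuple      # position label: (x, y) of the subject in the image
    brightness: float    # pixel label: average brightness of the subject
    area_ratio: float    # area label: subject area / total image area

@dataclass(frozen=True)
class Subject:
    name: str            # e.g. "dog" for the running-dog example
    labels: FeatureLabels

# The running-dog example: one subject carrying all three label types.
dog = Subject("dog", FeatureLabels(position=(120, 80),
                                   brightness=0.62,
                                   area_ratio=0.18))
print(dog.labels.area_ratio)  # prints 0.18
```

A feature label set would then simply be the collection of such records for every subject detected in one image.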
It should be understood that the first image and the second image may be paired images with a certain relationship, or unrelated, unpaired images. For paired images, existing semantic matching methods can already judge whether a semantic relationship exists; for unrelated, unpaired images, they cannot. The image semantic matching method of the present invention addresses the latter case.
S102, matching each feature label in the first feature label set with each feature label in the second feature label set, and determining feature labels shared by the first feature label set and the second feature label set.
It should be understood that the feature labels in a feature label set are not recorded indiscriminately but are grouped by subject. For example, if the first image contains subject 1 and subject 2, the feature label set of the first image comprises the position label, pixel label, and area label of subject 1, and the position label, pixel label, and area label of subject 2.
Therefore, the process of matching and determining common feature labels in the first feature label set and the second feature label set in step S102 may be:
(1) Match the subjects of the first image against the subjects of the second image to determine the subjects the two images share, then match the feature labels of those shared subjects to determine the feature labels common to the first feature label set and the second feature label set.
(2) Since the feature labels comprise position labels, pixel labels, and area labels, the labels in the first and second feature label sets may first be classified by type into a position label set, a pixel label set, and an area label set, and the labels of the same type (for example, the position labels of both images) then compared one by one to determine the feature labels common to the two sets.
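The first matching procedure, (1), can be sketched as follows. The dictionary layout, the tolerance for numeric labels, and the function name are assumptions made for illustration, not part of the embodiment:

```python
def common_feature_labels(subjects_a, subjects_b, tol=0.1):
    """Return the feature labels shared by two images' subjects.

    subjects_a / subjects_b map a subject name to a dict of labels,
    e.g. {"dog": {"position": "center", "area_ratio": 0.18}}.
    Numeric labels match when within `tol`; other labels must be equal.
    """
    shared = {}
    for name in subjects_a.keys() & subjects_b.keys():   # common subjects
        a, b = subjects_a[name], subjects_b[name]
        matched = {}
        for key in a.keys() & b.keys():
            va, vb = a[key], b[key]
            if isinstance(va, (int, float)) and isinstance(vb, (int, float)):
                if abs(va - vb) <= tol:
                    matched[key] = (va + vb) / 2   # keep the averaged value
            elif va == vb:
                matched[key] = va
        if matched:
            shared[name] = matched
    return shared
```

For two images that both contain a "dog" subject with similar labels, the function returns the dog's matching labels, while subjects present in only one image are ignored.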
S103, determining semantic matching relation between the first image and the second image based on the feature labels shared by the first feature label set and the second feature label set.
It should be noted that whether the first image and the second image have a semantic relationship may be judged by comparing the number of common feature labels with a preset number of labels, or by checking whether the feature labels common to the first feature label set and the second feature label set include the important feature labels of the first image.
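The two decision criteria just noted can be sketched in a single hedged helper. The encoding of a shared label as a (subject, label-type) pair and the default threshold are assumptions for illustration:

```python
def has_semantic_relation(shared_labels, min_count=3, important=None):
    """Judge the semantic relationship from the shared feature labels.

    shared_labels: collection of (subject, label_type) pairs common to
    both feature label sets.  If `important` is given, the important-
    label criterion is used; otherwise the count threshold applies.
    """
    if important is not None:
        # Semantic relationship iff every important label is shared.
        return set(important) <= set(shared_labels)
    return len(shared_labels) >= min_count
```

With three shared labels and the default threshold of three, the count criterion would report a semantic relationship; with an important label absent from the shared set, the containment criterion would not.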
The invention provides an image semantic matching method, a terminal, and a computer-readable storage medium. By analyzing an unpaired and unrelated first image and second image, a first feature label set and a second feature label set are obtained respectively, and whether the first image and the second image have a semantic relationship can be determined based on the feature labels shared by the two sets.
Further embodiments of the image semantic matching method provided by the present invention will be described based on the image semantic matching method described above.
Referring to fig. 2, fig. 2 is a flowchart illustrating a refinement of an image semantic matching method according to a second embodiment of the present invention, where the method includes:
s201, determining a main body in the first image and the second image.
A subject in an image is what stands out against the background, and an image may contain multiple subjects.
S202, determining feature labels of the subjects in the first image and the second image based on the subjects in the first image and the second image.
The feature labels may include one or more of a location label, a pixel label, an area label for each subject in the image.
The position label is the position of the subject in the image; the pixel label comprises the average brightness, gray scale, hue, tone, or color temperature information of the subject; and the area label comprises the area of the subject, or the ratio of the subject's area to the total area of the image.
S203, determining the feature labels of all subjects in the first image as a feature label set of the first image, and determining the feature labels of all subjects in the second image as a feature label set of the second image.
It is again noted that the first set of feature labels includes feature labels of respective subjects in the first image, and the second set of feature labels includes feature labels of respective subjects in the second image.
S204, matching each feature label in the first feature label set with each feature label in the second feature label set, and determining feature labels shared by the first feature label set and the second feature label set.
The feature labels in a feature label set are not recorded indiscriminately but are grouped by subject: for example, if the first image contains subject 1 and subject 2, the feature label set of the first image comprises the position label, pixel label, and area label of subject 1, and the position label, pixel label, and area label of subject 2.
The feature labels shared by the first feature label set and the second feature label set may be determined in either of the following two ways; this embodiment continues with the first as an example.
The first method comprises the following steps:
s20411, matching each subject in the first image with each subject in the second image, and determining a subject common to the first image and the second image.
S20412, comparing the feature labels of the subjects common to the first image and the second image to obtain the feature labels of the common subjects, which serve as the feature labels common to the first feature label set and the second feature label set.
And the second method comprises the following steps:
s20421, classifying each feature label in the first feature label set and each feature label in the second feature label set.
It should be understood that the feature labels include three types, i.e., a location label, a pixel label, and an area label, and the location label set, the pixel label set, and the area label set can be obtained by corresponding classification.
S20422, comparing the feature tags in the same category in the first feature tag set and the second feature tag set, respectively, and determining the feature tags shared by the first feature tag set and the second feature tag set.
That is, the position labels in the position label set of the first feature label set are compared one by one with the position labels in the position label set of the second feature label set, and likewise for the pixel and area label sets, thereby determining the feature labels common to the two sets.
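This second procedure, classifying by label type and then comparing per category, could be sketched as follows. The dictionary encoding and the (subject, value) pair representation are illustrative assumptions:

```python
def labels_by_category(subjects):
    """Group each subject's labels into position / pixel / area sets.

    subjects maps a subject name to a dict such as
    {"position": "center", "pixel": 0.6, "area": 0.2}.  Returns a dict
    of category name -> set of (subject, value) pairs.
    """
    cats = {"position": set(), "pixel": set(), "area": set()}
    for name, labels in subjects.items():
        for cat in cats:
            if cat in labels:
                cats[cat].add((name, labels[cat]))
    return cats

def shared_by_category(subjects_a, subjects_b):
    """Intersect the per-category label sets of two images."""
    ca, cb = labels_by_category(subjects_a), labels_by_category(subjects_b)
    return {cat: ca[cat] & cb[cat] for cat in ca}
```

Only labels that agree exactly within the same category survive the intersection; a tolerance-based comparison, as in the first procedure, could be substituted for numeric labels.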
S205, determining semantic matching relation between the first image and the second image based on feature labels shared by the first feature label set and the second feature label set.
When the number of common feature labels is greater than or equal to a preset number of labels, it is judged that the first image and the second image have a semantic relationship.
In a further example, step S205 may also be implemented in two steps:
and S2051, determining important feature labels in the first feature label set of the first image.
It should be understood that the important feature labels are the feature labels of the important subject of the image, where the important subject may be the subject occupying the largest area of the image, the brightest subject, or the subject located in the fifth (center) cell of the nine-square grid.
Therefore, determining the important feature labels in the first feature label set of the first image may mean determining each feature label of the subject occupying the largest area of the first image as an important feature label; or determining each feature label of the brightest subject in the first image as an important feature label; or determining each feature label of the subjects of the first image located in the fifth (center) cell of the nine-square grid as important feature labels.
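The three alternative rules for picking the important subject can be sketched as one selector. The field names and the geometry used for the nine-square-grid test are assumptions for illustration:

```python
def important_subject(subjects, image_size, rule="area"):
    """Pick the important subject by one of the three rules above.

    subjects: list of dicts with keys 'name', 'area', 'brightness',
    and 'center' (x, y).  image_size: (width, height) in pixels.
    """
    if rule == "area":
        # Rule 1: subject occupying the largest area of the image.
        return max(subjects, key=lambda s: s["area"])
    if rule == "brightness":
        # Rule 2: the brightest subject.
        return max(subjects, key=lambda s: s["brightness"])
    if rule == "center":
        # Rule 3: a subject in the fifth (center) cell of the 3x3 grid.
        w, h = image_size
        def in_center(s):
            x, y = s["center"]
            return w / 3 <= x < 2 * w / 3 and h / 3 <= y < 2 * h / 3
        centered = [s for s in subjects if in_center(s)]
        return centered[0] if centered else None
    raise ValueError(rule)
```

The important feature labels would then be all labels attached to the subject this selector returns.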
S2052, when the feature labels shared by the first feature label set and the second feature label set include the important feature labels, judging that the first image and the second image have a semantic relationship.
The invention provides an image semantic matching method, a terminal, and a computer-readable storage medium. By analyzing an unpaired and unrelated first image and second image, a first feature label set and a second feature label set are obtained respectively, and whether the first image and the second image have a semantic relationship can be determined based on the feature labels shared by the two sets.
The present embodiment further provides an apparatus, as shown in fig. 3, which includes a processor 31, a memory 32, and a communication bus 33, wherein:
the communication bus 33 is used for realizing connection communication between the processor 31 and the memory 32;
the processor 31 is configured to execute the image semantic matching program stored in the memory 32 to implement the steps of the image semantic matching method in the above embodiments.
The present embodiment also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the image semantic matching method in the above embodiments.
It should be noted that, for simplicity, the above method embodiments are described as a series of acts, but those skilled in the art will understand that the present invention is not limited by the order of acts described, since some steps may be performed in other orders or simultaneously. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily all required by the invention.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments. The serial numbers of the embodiments are for description only and do not indicate their relative merits. Those skilled in the art can devise many other forms without departing from the spirit and scope of the present invention as claimed, and these forms fall within its protection.

Claims (10)

1. An image semantic matching method, characterized in that the method comprises:
analyzing a first image and a second image to obtain a first feature label set and a second feature label set respectively, wherein the first feature label set comprises the feature labels of each subject in the first image, and the second feature label set comprises the feature labels of each subject in the second image;
matching each feature label in the first feature label set with each feature label in the second feature label set, and determining feature labels shared by the first feature label set and the second feature label set;
determining a semantic matching relationship of the first image and the second image based on feature labels common to the first set of feature labels and the second set of feature labels.
2. The image semantic matching method according to claim 1, wherein the feature labels comprise position labels, pixel labels, area labels of each subject in the image;
the position label is the position of the subject in the image;
the pixel label comprises average brightness, gray scale, hue, tone, or color temperature information of the subject;
the area label comprises the area of the subject, or the ratio of the subject's area to the total area of the image.
3. The image semantic matching method according to claim 1, wherein the analyzing the first image and the second image to obtain a first feature tag set and a second feature tag set respectively comprises:
determining a subject in the first image and the second image;
determining feature labels of the subjects in the first image and the second image based on the subjects in the first image and the second image;
and determining the feature label of each subject in the first image as the feature label set of the first image, and determining the feature label of each subject in the second image as the feature label set of the second image.
4. The image semantic matching method according to claim 1, wherein the matching each feature tag in the first set of feature tags with each feature tag in the second set of feature tags, and the determining feature tags common to the first set of feature tags and the second set of feature tags, comprises:
matching each subject in the first image with each subject in the second image, determining a subject common to the first image and the second image;
and comparing the feature labels of the common subjects of the first image and the second image to obtain the feature labels of the common subjects, and taking the feature labels as the feature labels common to the first feature label set and the second feature label set.
5. The image semantic matching method according to claim 1, wherein the matching each feature tag in the first set of feature tags with each feature tag in the second set of feature tags, and the determining feature tags common to the first set of feature tags and the second set of feature tags, comprises:
classifying each feature tag in the first set of feature tags from each feature tag in the second set of feature tags;
and respectively comparing the feature labels of the same category in the first feature label set and the second feature label set, and determining the feature labels shared by the first feature label set and the second feature label set.
6. The image semantic matching method of claim 1, wherein the determining the semantic matching relationship of the first image and the second image based on the feature labels common to the first feature label set and the second feature label set comprises:
and when the number of common feature labels is greater than or equal to a preset number of labels, judging that the first image and the second image have a semantic relationship.
7. The image semantic matching method of claim 1, wherein the determining the semantic matching relationship of the first image and the second image based on the feature labels common to the first feature label set and the second feature label set comprises:
determining an important feature label in a first set of feature labels for the first image;
when the feature labels shared by the first feature label set and the second feature label set include the important feature labels, judging that the first image and the second image have a semantic relationship.
8. The image semantic matching method of claim 7, wherein the determining the significant feature labels in the first set of feature labels of the first image comprises:
determining each feature label of the subject occupying the largest area of the first image as an important feature label;
or determining each feature label of the brightest subject in the first image as an important feature label;
or determining each feature label of the subjects of the first image located in the fifth (center) cell of the nine-square grid as important feature labels.
9. An apparatus comprising a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the image semantic matching method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the image semantic matching method according to any one of claims 1 to 8.
CN201910824888.3A 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium Active CN110633740B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910824888.3A CN110633740B (en) 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium
PCT/CN2020/112352 WO2021043092A1 (en) 2019-09-02 2020-08-31 Image semantic matching method and device, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910824888.3A CN110633740B (en) 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110633740A true CN110633740A (en) 2019-12-31
CN110633740B CN110633740B (en) 2024-04-09

Family

ID=68969962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910824888.3A Active CN110633740B (en) 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110633740B (en)
WO (1) WO2021043092A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021043092A1 (en) * 2019-09-02 2021-03-11 平安科技(深圳)有限公司 Image semantic matching method and device, terminal and computer readable storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102598113A (en) * 2009-06-30 2012-07-18 安芯美特控股有限公司 Method circuit and system for matching an object or person present within two or more images
US8782077B1 (en) * 2011-06-10 2014-07-15 Google Inc. Query image search
CN107067030A (en) * 2017-03-29 2017-08-18 北京小米移动软件有限公司 The method and apparatus of similar pictures detection
CN108304435A (en) * 2017-09-08 2018-07-20 腾讯科技(深圳)有限公司 Information recommendation method, device, computer equipment and storage medium
CN109977253A (en) * 2019-03-29 2019-07-05 哈尔滨工业大学 A kind of fast image retrieval method and device based on semanteme and content
CN110059212A (en) * 2019-03-16 2019-07-26 平安科技(深圳)有限公司 Image search method, device, equipment and computer readable storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN109166261B (en) * 2018-10-11 2022-06-07 平安科技(深圳)有限公司 Image processing method, device and equipment based on image recognition and storage medium
CN110633740B (en) * 2019-09-02 2024-04-09 平安科技(深圳)有限公司 Image semantic matching method, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN110633740B (en) 2024-04-09
WO2021043092A1 (en) 2021-03-11

Similar Documents

Publication Publication Date Title
CN108765278B (en) Image processing method, mobile terminal and computer readable storage medium
US10282643B2 (en) Method and apparatus for obtaining semantic label of digital image
US10692133B2 (en) Color estimation device, color estimation method, and color estimation program
US11455831B2 (en) Method and apparatus for face classification
US10635942B2 (en) Method and apparatus for identifying a product
CN110399842B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN108304562B (en) Question searching method and device and intelligent terminal
CN112308802A (en) Image analysis method and system based on big data
CN114241358A (en) Equipment state display method, device and equipment based on digital twin transformer substation
CN111612000A (en) Commodity classification method and device, electronic equipment and storage medium
CN110633740A (en) Image semantic matching method, terminal and computer-readable storage medium
CN111126493B (en) Training method and device for deep learning model, electronic equipment and storage medium
US20170293660A1 (en) Intent based clustering
CN114926464A (en) Image quality inspection method, image quality inspection device and system in double-recording scene
CN112884866B (en) Coloring method, device, equipment and storage medium for black-and-white video
CN111339367B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN114373068A (en) Industry-scene OCR model implementation system, method and equipment
CN109840557B (en) Image recognition method and device
US10832076B2 (en) Method and image processing entity for applying a convolutional neural network to an image
CN113391779A (en) Parameter adjusting method, device and equipment for paper-like screen
CN113449814B (en) Picture level classification method and system
CN110880022A (en) Labeling method, labeling device and storage medium
CN111127310B (en) Image processing method and device, electronic equipment and storage medium
CN114118449B (en) Image label identification method, medium and equipment based on bias label learning model
CN110958489A (en) Video processing method, video processing device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant