CN111274427A - Picture processing method and device and computer storage medium - Google Patents

Picture processing method and device and computer storage medium

Info

Publication number
CN111274427A
CN111274427A (application CN202010018567.7A)
Authority
CN
China
Prior art keywords
picture
labeled
tag
label
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010018567.7A
Other languages
Chinese (zh)
Inventor
贾书军
程帅
杨春阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qinggan Intelligent Technology Co Ltd
Original Assignee
Shanghai Qinggan Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qinggan Intelligent Technology Co Ltd filed Critical Shanghai Qinggan Intelligent Technology Co Ltd
Priority to CN202010018567.7A priority Critical patent/CN111274427A/en
Publication of CN111274427A publication Critical patent/CN111274427A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a picture processing method, a picture processing apparatus, and a computer storage medium. The picture processing method comprises the following steps: acquiring a picture to be labeled; taking the picture to be labeled as the input of a preset picture labeling model to obtain a picture tag of the picture to be labeled, wherein the picture labeling model is obtained by training based on at least one historical picture and a corresponding user-defined picture tag; and displaying the picture tag of the picture to be labeled. In the picture processing method, apparatus, and computer storage medium, the picture tag of a picture to be labeled is predicted from historical pictures and their corresponding user-defined picture tags, so that personalized tags can be added to pictures and the user experience is improved.

Description

Picture processing method and device and computer storage medium
Technical Field
The present invention relates to the field of image processing, and in particular to a picture processing method, a picture processing apparatus, and a computer storage medium.
Background
With the rapid development of science, technology, and the economy, terminals such as smartphones and tablet computers are widely used. Various pictures, including screenshots, self-portrait photos, and the like, can be stored on the local terminal. Meanwhile, to facilitate classified management of these pictures, the terminal can add different picture tags to different pictures, so that the user can conveniently perform operations such as quick searches by picture tag. However, the picture tags of existing pictures are added automatically by the terminal system, so the user cannot personalize the picture tags, which degrades the user experience.
Disclosure of Invention
The invention aims to provide a picture processing method, a picture processing apparatus, and a computer storage medium that predict the picture tags of pictures to be labeled from historical pictures and corresponding user-defined picture tags, thereby adding personalized tags to pictures and improving the user experience.
To achieve this aim, the technical solution of the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the image processing method includes:
acquiring a picture to be labeled;
taking the picture to be labeled as the input of a preset picture labeling model to obtain a picture tag of the picture to be labeled, wherein the picture labeling model is obtained by training based on at least one historical picture and a corresponding user-defined picture tag;
and displaying the picture tag of the picture to be labeled.
As an implementation manner, before the step of using the to-be-labeled picture as an input of a preset picture labeling model to obtain a picture tag corresponding to the to-be-labeled picture, the method includes:
sending a picture labeling model training request to a cloud server, wherein the training request comprises at least one historical picture and a corresponding user-defined picture tag;
and receiving a picture marking model which is sent by the cloud server and established by taking the at least one historical picture as the input of the model and taking the corresponding user-defined picture tag as the output of the model.
As an implementation manner, the acquiring a to-be-annotated picture includes:
and receiving a picture selection instruction input by a user, and determining the picture selected by the user as a picture to be annotated.
As one of the implementation modes, the method further comprises the following steps:
receiving a picture label editing instruction, and determining a target picture label of the picture to be labeled according to the picture label editing instruction.
As an implementation manner, the receiving a picture tag editing instruction, and determining a target picture tag of the picture to be annotated according to the picture tag editing instruction includes:
and receiving a picture tag selection instruction input by a user, and determining the picture tag selected by the user as a target picture tag of the picture to be labeled.
As one of the implementation modes, the method further comprises the following steps:
adding the target picture label to the picture to be labeled;
and storing the to-be-labeled picture added with the label to a storage position corresponding to the target picture label according to the set corresponding relation between the picture label and the storage position.
As one of the implementation modes, the method further comprises the following steps:
receiving a picture query instruction;
and inquiring and displaying the picture with the picture tag corresponding to the picture inquiry instruction according to the set corresponding relation between the picture tag and the picture.
As one of the implementation modes, the method further comprises the following steps:
and updating the picture labeling model according to the picture to be labeled and the picture label of the picture to be labeled.
In a second aspect, an embodiment of the present invention provides a picture processing apparatus, which includes a processor and a memory for storing a program; when the program is executed by the processor, the processor is enabled to implement the picture processing method according to the first aspect.
In a third aspect, an embodiment of the present invention provides a computer storage medium, which stores a computer program, and when the computer program is executed by a processor, the image processing method according to the first aspect is implemented.
The picture processing method, apparatus, and computer storage medium provided by the embodiments of the invention comprise: acquiring a picture to be labeled; taking the picture to be labeled as the input of a preset picture labeling model to obtain a picture tag of the picture to be labeled, wherein the picture labeling model is obtained by training based on at least one historical picture and a corresponding user-defined picture tag; and displaying the picture tag of the picture to be labeled. In this way, the picture tags of pictures to be labeled are predicted from historical pictures and corresponding user-defined picture tags, personalized tags are added to pictures, and the user experience is improved.
Drawings
Fig. 1 is a schematic flowchart of a picture processing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an embodiment of training a picture labeling model;
FIG. 3 is a detailed diagram of the training of the image annotation model according to the embodiment of the present invention;
fig. 4 is a schematic flowchart of a picture processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution of the invention is further elaborated below with reference to the drawings and the specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, a picture processing method provided in an embodiment of the present invention may be executed by a picture processing apparatus provided in an embodiment of the present invention. The picture processing apparatus may be implemented in software and/or hardware; in a specific application, it may be a terminal or a cloud server. In this embodiment, taking the case in which the picture processing method is applied to a terminal as an example, the picture processing method includes the following steps:
step S101: acquiring a picture to be marked;
here, the terminal acquires the picture to be annotated, which may be a picture acquired by the terminal acquiring a camera device of the terminal, such as a camera, or a picture transmitted by another terminal received by the terminal, or a picture downloaded by the terminal from a third party through a network. In addition, the terminal may also receive a selection instruction of the user, and take the picture selected by the user as the picture to be annotated. In an embodiment, the obtaining the picture to be annotated includes: and receiving a picture selection instruction input by a user, and determining the picture selected by the user as a picture to be annotated. That is to say, the user can select any picture from the terminal as the picture to be annotated, so as to obtain the picture tag of the picture to be annotated.
Step S102: taking the picture to be labeled as the input of a preset picture labeling model to obtain a picture tag of the picture to be labeled, wherein the picture labeling model is obtained by training based on at least one historical picture and a corresponding user-defined picture tag.
The terminal can acquire a training sample set in advance, comprising at least one historical picture and corresponding user-defined picture tags. The at least one historical picture serves as the model input variable, the corresponding user-defined picture tags serve as the model output variable, and a picture labeling model is built on this training set. Here, a model construction algorithm such as a neural network algorithm or a genetic algorithm may be used to build the picture labeling model from the training samples. Picture tags are used to distinguish and identify different types of pictures; for example, picture tags may include a person tag, a scene tag, an animal tag, a building tag, and the like. The same picture may correspond to multiple tags: if a picture contains both a person and a building, its tags may include a person tag and a building tag. It should be noted that the picture labeling model may be trained and updated either on the terminal or on a cloud server; considering that the terminal may lack sufficient computing capability due to hardware limitations, in this embodiment the picture labeling model is trained on the cloud server so that it can be trained quickly.
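The patent leaves the model architecture open ("a neural network algorithm, a genetic algorithm, etc."). Purely as an illustration of the input/output contract described above — historical-picture features in, user-defined tags out — the following hypothetical sketch trains a 1-nearest-neighbor annotator; `HistorySample`, `TagModel`, and the toy feature vectors are all invented for this example and are not part of the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class HistorySample:
    features: list[float]   # e.g. output of some feature extractor (toy values here)
    tags: list[str]         # user-defined picture tags; one picture may carry several

class TagModel:
    """Toy 1-nearest-neighbor picture labeling model."""
    def __init__(self, history: list[HistorySample]):
        self.history = history  # "training" is just storing the labeled samples

    def predict(self, features: list[float]) -> list[str]:
        # Return the tags of the closest historical picture (Euclidean distance).
        best = min(self.history,
                   key=lambda s: math.dist(s.features, features))
        return best.tags

# Train on two user-labeled historical pictures, then annotate a new one.
model = TagModel([
    HistorySample([0.9, 0.1], ["person"]),
    HistorySample([0.1, 0.9], ["building", "scene"]),
])
print(model.predict([0.2, 0.8]))  # → ['building', 'scene']
```

Note how a single prediction can yield multiple tags, matching the patent's point that one picture may correspond to several picture tags.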
In an embodiment, before the step of taking the picture to be labeled as the input of a preset picture labeling model to obtain the picture tag corresponding to the picture to be labeled, the method includes: sending a picture labeling model training request to a cloud server, wherein the training request comprises at least one historical picture and a corresponding user-defined picture tag; and receiving, from the cloud server, a picture labeling model established by taking the at least one historical picture as the input of the model and the corresponding user-defined picture tag as the output of the model. Because the computing capability of the cloud server is high, it can train the picture labeling model quickly, and problems such as the terminal freezing during training are avoided. In addition, the cloud server's computing capability allows it to train on more samples, i.e., more historical pictures, so the prediction accuracy of the picture labeling model is higher. For each historical picture, the user can customize its picture tag according to actual needs, realizing personalized tag definition. It should be noted that the terminal may encrypt the at least one historical picture before sending it to the cloud server; correspondingly, after the cloud server establishes the picture labeling model, it may delete the at least one historical picture to avoid picture leakage, thereby improving security.
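The patent does not specify the wire format of the training request, only that it carries at least one historical picture with its user-defined tag and that the pictures may be encrypted before upload. A hypothetical JSON payload could be built as below; the XOR "cipher" is a stand-in for real encryption such as AES, and every field name is an assumption made for this sketch.

```python
import base64
import json

def build_training_request(samples: list[tuple[bytes, list[str]]],
                           key: bytes) -> str:
    """Build a picture labeling model training request as a JSON string.

    `samples` pairs raw picture bytes with user-defined tags. The XOR
    'cipher' below is only a placeholder for real encryption (e.g. AES).
    """
    def toy_encrypt(data: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    return json.dumps({
        "type": "train_picture_labeling_model",   # assumed request type
        "samples": [
            {"picture": base64.b64encode(toy_encrypt(pic)).decode("ascii"),
             "tags": tags}
            for pic, tags in samples
        ],
    })

request = build_training_request([(b"\x89PNG...", ["person"])], key=b"secret")
print(json.loads(request)["samples"][0]["tags"])  # → ['person']
```

On the server side, the corresponding handler would decrypt each picture, train the model, delete the pictures as the patent suggests, and return the trained model to the terminal.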
Step S103: displaying the picture tag of the picture to be labeled.
Specifically, after the terminal obtains the picture tag of the picture to be labeled, it displays that tag so that the tag can be added to the picture to be labeled.
It should be noted that, compared with obtaining the picture tag through a picture labeling model on the cloud server, obtaining it through a picture labeling model on the local terminal makes the picture to be labeled less likely to leak, thereby ensuring the safety of the picture, that is, improving security of use.
In summary, in the image processing method provided in the above embodiment, after the terminal acquires the image to be annotated, the image to be annotated is used as an input of a preset image annotation model obtained by training based on at least one history image and a corresponding user-defined image tag, so as to obtain the image tag of the image to be annotated, and the image tag of the image to be annotated is displayed. Therefore, the picture tags of the pictures to be labeled are predicted through the historical pictures and the corresponding user-defined picture tags, personalized tag addition to the pictures is achieved, and user experience is improved.
In an embodiment, the method may further comprise: receiving a picture tag editing instruction, and determining a target picture tag of the picture to be labeled according to the picture tag editing instruction. It can be understood that the picture tag obtained from the picture labeling model may not be the tag the user actually wants, the model may output several candidate tags, or the user may simply want to modify the tag as needed. In these cases, after receiving the picture tag editing instruction, the terminal determines the target picture tag of the picture to be labeled according to that instruction. Here, the picture tag editing instruction may be a picture tag selection instruction, a picture tag modification instruction, or the like. Determining the target picture tag from the received editing instruction thus adapts to the user's needs and further improves the user experience. In an embodiment, receiving a picture tag editing instruction and determining the target picture tag according to it includes: receiving a picture tag selection instruction input by the user, and determining the picture tag selected by the user as the target picture tag of the picture to be labeled.
It can be understood that a picture to be labeled may have several picture tags; for example, its tags may include both a person tag and a building tag, while the user wants to keep only one of them. In this case, the user may select one of the tags as the target picture tag; correspondingly, the terminal receives the picture tag selection instruction input by the user and determines the selected tag as the target picture tag of the picture to be labeled.
In one embodiment, the method further comprises:
adding the target picture label to the picture to be labeled;
and storing the to-be-labeled picture added with the label to a storage position corresponding to the target picture label according to the set corresponding relation between the picture label and the storage position.
It can be understood that for pictures with different picture tags, the terminal may set up different corresponding storage locations, so that pictures with the same picture tag are stored together and the user can conveniently search for them. The storage location represents a storage path or storage space for the picture, and may specifically be a folder or the like. In this way, the terminal adds the target picture tag to the picture to be labeled and stores the tagged picture to the corresponding storage location, making it easy for the user to find the picture and further improving the user experience.
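As a sketch of the tag-to-storage-location correspondence described above, the hypothetical helper below moves a tagged picture into the folder mapped to its target tag. The folder layout and the `unsorted` fallback are assumptions for illustration, not part of the patent.

```python
import shutil
import tempfile
from pathlib import Path

def store_by_tag(picture: Path, tag: str, tag_dirs: dict[str, Path]) -> Path:
    """Move a labeled picture into the folder mapped to its target tag.

    `tag_dirs` is the set correspondence between picture tags and
    storage locations; unknown tags fall back to an 'unsorted' folder.
    """
    dest_dir = tag_dirs.get(tag, tag_dirs["unsorted"])
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(picture), dest_dir / picture.name))

# Demo with a temporary album layout (all paths here are illustrative).
root = Path(tempfile.mkdtemp())
(pic := root / "img001.jpg").write_bytes(b"fake-jpeg")
dirs = {"person": root / "person", "unsorted": root / "unsorted"}
stored = store_by_tag(pic, "person", dirs)
print(stored.parent.name)  # → person
```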
In an embodiment, the method further comprises:
receiving a picture query instruction;
and inquiring and displaying the picture with the picture tag corresponding to the picture inquiry instruction according to the set corresponding relation between the picture tag and the picture.
Here, after receiving a picture query instruction in voice or text form, the terminal queries and displays the pictures whose picture tags correspond to the query instruction, according to the set correspondence between picture tags and pictures. For example, when the picture tag corresponding to the query instruction is the person tag, pictures carrying the person tag are displayed. In this way, pictures are found quickly and the user experience is further improved.
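The set correspondence between picture tags and pictures can be held in a simple inverted index, as in this hypothetical sketch (the `TagIndex` name and its API are invented for illustration):

```python
from collections import defaultdict

class TagIndex:
    """Set correspondence between picture tags and pictures."""
    def __init__(self):
        self._by_tag: dict[str, list[str]] = defaultdict(list)

    def add(self, picture: str, tags: list[str]) -> None:
        # A picture with several tags is listed under each of them.
        for tag in tags:
            self._by_tag[tag].append(picture)

    def query(self, tag: str) -> list[str]:
        # A picture query instruction resolves to a tag, then to pictures.
        return list(self._by_tag.get(tag, []))

index = TagIndex()
index.add("img001.jpg", ["person"])
index.add("img002.jpg", ["person", "building"])
print(index.query("person"))  # → ['img001.jpg', 'img002.jpg']
```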
In one embodiment, the method further comprises:
and updating the picture labeling model according to the picture to be labeled and the picture label of the picture to be labeled.
The more training samples the picture labeling model has, the higher its prediction accuracy, that is, the better the predicted picture tags meet the user's needs. Therefore, the picture labeling model can be updated with the picture to be labeled and its picture tag, further improving the model's prediction accuracy and thus more accurately adding personalized tags to pictures.
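The patent does not say how this update is performed. One minimal, hypothetical way to fold a newly confirmed (picture, tag) pair back into a model is to keep a running-mean feature prototype per tag; the class name and toy feature vectors below are assumptions for illustration only.

```python
class PrototypeAnnotator:
    """Toy labeling model kept up to date with newly labeled pictures.

    Each tag owns a running-mean feature prototype; updating the model
    with a confirmed (features, tag) pair simply refines that prototype.
    """
    def __init__(self):
        self._sums: dict[str, list[float]] = {}
        self._counts: dict[str, int] = {}

    def update(self, features: list[float], tag: str) -> None:
        if tag not in self._sums:
            self._sums[tag] = [0.0] * len(features)
            self._counts[tag] = 0
        self._sums[tag] = [s + f for s, f in zip(self._sums[tag], features)]
        self._counts[tag] += 1

    def predict(self, features: list[float]) -> str:
        # Pick the tag whose mean prototype is closest (squared distance).
        def sq_dist(tag: str) -> float:
            proto = [s / self._counts[tag] for s in self._sums[tag]]
            return sum((p - f) ** 2 for p, f in zip(proto, features))
        return min(self._sums, key=sq_dist)

model = PrototypeAnnotator()
model.update([0.9, 0.1], "person")
model.update([0.1, 0.9], "building")
model.update([0.2, 0.8], "building")   # a newly labeled picture refines the model
print(model.predict([0.15, 0.85]))  # → building
```

Because each update only touches one tag's running sum, the model can absorb every newly labeled picture at negligible cost, matching the patent's idea that more samples make predictions better fit the user.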
Based on the same inventive concept of the foregoing embodiments, the present embodiment describes technical solutions of the foregoing embodiments in detail through specific examples. In this embodiment, the terminal is taken as a mobile phone as an example for explanation.
First, the training pictures are labeled. Here, the user may define tags personally or use the system's tags, adding different tags to different pictures; for example, the user may select tens of pictures as training pictures and tag each of them. In practice, the user can have the mobile phone display the system's original tag first and then add to or modify it.
Second, the picture labeling model is trained. Fig. 2 shows a schematic diagram of training the picture labeling model: after the pictures are manually labeled by the user, the picture labeling model can be trained either on the mobile phone or on a cloud server. Fig. 3 shows a detailed diagram of the training, taking training on a cloud server as an example: the mobile phone uploads the training pictures and their corresponding tags to the cloud server, the cloud server trains the picture labeling model on them and finally sends the trained model back to the mobile phone, which stores it. Referring to fig. 4, a detailed flowchart of the picture processing method includes the following steps:
step S201: selecting a picture to be marked;
here, the user may select a picture as the picture to be annotated.
Step S202: calling the automatic picture labeling model to label the picture to be labeled;
Specifically, the mobile phone calls the automatic picture labeling model to label the picture to be labeled, so as to obtain the tag of the picture to be labeled.
Step S203: and displaying the labeling result.
Specifically, the mobile phone displays the label of the picture to be labeled.
Here, for the displayed annotation result, the user may manually modify the annotation result, i.e., modify the label.
In summary, the picture processing method provided in the above embodiment gives the mobile phone user a picture labeling model trained on tags the user sets personally, so that the user has a personalized search engine over the personal album, thereby providing a personalized service.
Based on the same inventive concept as the foregoing embodiments, an embodiment of the present invention provides a picture processing apparatus, which may be a terminal, a cloud server, or the like. As shown in fig. 5, the apparatus includes: a processor 110 and a memory 111 for storing computer programs capable of running on the processor 110. The single processor 110 illustrated in fig. 5 does not indicate that the number of processors is one; it merely indicates the processor's position relative to the other devices, and in practical applications the number of processors 110 may be one or more. The same applies to the memory 111 illustrated in fig. 5: it merely indicates the memory's position relative to the other devices, and in practice the number of memories 111 may be one or more. The processor 110 is configured to implement the picture processing method applied to the above apparatus when the computer program is executed.
The apparatus may further comprise: at least one network interface 112. The various components of the device are coupled together by a bus system 113. It will be appreciated that the bus system 113 is used to enable communications among the components. The bus system 113 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 113 in FIG. 5.
The memory 111 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 111 described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 111 in embodiments of the present invention is used to store various types of data to support the operation of the device. Examples of such data include: any computer program for operating on the device, such as operating systems and application programs; contact data; telephone book data; a message; a picture; video, etc. The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application programs may include various application programs such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. Here, the program that implements the method of the embodiment of the present invention may be included in an application program.
Based on the same inventive concept as the foregoing embodiments, this embodiment further provides a computer storage medium in which a computer program is stored. The computer storage medium may be a memory such as a ferromagnetic random access memory (FRAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or it may be a device including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant. When the computer program stored in the computer storage medium is executed by a processor, the picture processing method applied to the above device is realized. For the specific steps performed when the computer program is executed by the processor, please refer to the description of the embodiment shown in fig. 1, which is not repeated here.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A picture processing method, characterized in that the method comprises:
acquiring a picture to be marked;
taking the picture to be labeled as the input of a preset picture labeling model to obtain a picture label of the picture to be labeled; the image annotation model is obtained by training based on at least one historical image and a corresponding user-defined image tag;
and displaying the picture label of the picture to be marked.
2. The method according to claim 1, applied to a terminal, wherein before the step of taking the picture to be labeled as the input of a preset picture labeling model to obtain the picture label of the picture to be labeled, the method comprises:
sending a picture labeling model training request to a cloud server, wherein the picture labeling model training request comprises at least one historical picture and a corresponding user-defined picture tag;
and receiving the picture labeling model sent by the cloud server, the picture labeling model being established by taking the at least one historical picture as the input of the model and the corresponding user-defined picture tag as the output of the model.
3. The method according to claim 1, wherein the acquiring the picture to be labeled comprises:
receiving a picture selection instruction input by a user, and determining the picture selected by the user as the picture to be labeled.
4. The method of claim 1, further comprising:
receiving a picture tag editing instruction, and determining a target picture tag of the picture to be labeled according to the picture tag editing instruction.
5. The method according to claim 4, wherein the receiving a picture tag editing instruction and determining a target picture tag of the picture to be labeled according to the picture tag editing instruction comprises:
receiving a picture tag selection instruction input by a user, and determining the picture tag selected by the user as the target picture tag of the picture to be labeled.
6. The method of claim 4, further comprising:
adding the target picture tag to the picture to be labeled;
and storing the labeled picture to the storage location corresponding to the target picture tag, according to a set correspondence between picture tags and storage locations.
7. The method of claim 1, further comprising:
receiving a picture query instruction;
and querying and displaying the picture whose picture tag corresponds to the picture query instruction, according to a set correspondence between picture tags and pictures.
8. The method of claim 1, further comprising:
and updating the picture labeling model according to the picture to be labeled and the picture label of the picture to be labeled.
9. A picture processing apparatus, characterized in that the apparatus comprises a processor and a memory for storing a program; when the program is executed by the processor, the processor is caused to implement the picture processing method according to any one of claims 1 to 8.
10. A computer storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, implements the picture processing method of any one of claims 1 to 8.
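The flow of claims 1, 6, 7 and 8 can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the claims do not specify a model architecture, so a toy nearest-neighbor classifier over byte histograms stands in for the "picture labeling model", and the class and function names (`PictureLabelingModel`, `store_by_tag`, `query_by_tag`) are hypothetical, not from the patent.

```python
from collections import Counter


class PictureLabelingModel:
    """Toy stand-in for the patent's picture labeling model.

    The claims only say the model is trained on historical pictures and
    their user-defined tags; here a 1-nearest-neighbor match over byte
    histograms is used purely for illustration.
    """

    def __init__(self):
        self._examples = []  # list of (histogram, tag) pairs

    @staticmethod
    def _features(picture: bytes) -> Counter:
        # Trivial "feature extraction": a histogram of byte values.
        return Counter(picture)

    def train(self, history):
        """history: iterable of (picture_bytes, user_defined_tag) pairs."""
        for picture, tag in history:
            self._examples.append((self._features(picture), tag))

    def predict(self, picture: bytes) -> str:
        # Claim 1: take the picture as model input, return its picture label.
        feats = self._features(picture)
        def overlap(example):
            hist, _tag = example
            return sum((feats & hist).values())  # shared byte counts
        _hist, tag = max(self._examples, key=overlap)
        return tag

    def update(self, picture: bytes, tag: str):
        # Claim 8: refine the model with the newly labeled picture.
        self._examples.append((self._features(picture), tag))


def store_by_tag(storage_map, picture, tag, locations):
    # Claim 6: save the labeled picture to the storage location
    # mapped to its tag in the tag-to-location correspondence.
    storage_map.setdefault(locations[tag], []).append(picture)


def query_by_tag(tag_index, tag):
    # Claim 7: look pictures up through the tag-to-picture correspondence.
    return tag_index.get(tag, [])
```

Under these assumptions, usage mirrors the claimed steps: train on historical pictures with user-defined tags, predict a tag for a new picture, let the user confirm or edit it, then store and query by tag.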
CN202010018567.7A 2020-01-08 2020-01-08 Picture processing method and device and computer storage medium Pending CN111274427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010018567.7A CN111274427A (en) 2020-01-08 2020-01-08 Picture processing method and device and computer storage medium

Publications (1)

Publication Number Publication Date
CN111274427A true CN111274427A (en) 2020-06-12

Family

ID=71003090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010018567.7A Pending CN111274427A (en) 2020-01-08 2020-01-08 Picture processing method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN111274427A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117743622A (en) * 2023-12-20 2024-03-22 武汉荆楚点石数码设计有限公司 Picture tag generation method, device and equipment
CN117743622B (en) * 2023-12-20 2024-08-02 武汉荆楚点石数码设计有限公司 Picture tag generation method, device and equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391703A (en) * 2017-07-28 2017-11-24 北京理工大学 The method for building up and system of image library, image library and image classification method
CN107622281A (en) * 2017-09-20 2018-01-23 广东欧珀移动通信有限公司 Image classification method, device, storage medium and mobile terminal
CN108416003A (en) * 2018-02-27 2018-08-17 百度在线网络技术(北京)有限公司 A kind of picture classification method and device, terminal, storage medium
CN108734227A (en) * 2018-06-13 2018-11-02 北京宏岸图升网络技术有限公司 A kind of sorting technique and device of picture
CN109299296A (en) * 2018-11-01 2019-02-01 郑州云海信息技术有限公司 A kind of interactive image text marking method and system
CN109376868A (en) * 2018-09-30 2019-02-22 北京字节跳动网络技术有限公司 Information management system
CN109934194A (en) * 2019-03-20 2019-06-25 深圳市网心科技有限公司 Picture classification method, edge device, system and storage medium
CN110427869A (en) * 2019-07-30 2019-11-08 东莞弓叶互联科技有限公司 A kind of distal end visual selection recognition methods for garbage disposal

Similar Documents

Publication Publication Date Title
US9251252B2 (en) Context server for associating information based on context
US10353943B2 (en) Computerized system and method for automatically associating metadata with media objects
US8055271B2 (en) Intelligent location-to-cell mapping using annotated media
US20200195980A1 (en) Video information processing method, computer equipment and storage medium
US20080222212A1 (en) Peer-to-peer data synchronization architecture
US20090089322A1 (en) Loading predicted tags onto electronic devices
CN104135716A (en) Push method and system of interest point information
US11586683B2 (en) Methods, systems and recording mediums for managing conversation contents in messenger
US20090022123A1 (en) Apparatus and method for providing contents sharing service on network
US20080126960A1 (en) Context server for associating information with a media object based on context
US10057606B2 (en) Systems and methods for automated application of business rules using temporal metadata and content fingerprinting
CN108829753A (en) A kind of information processing method and device
EP4080507A1 (en) Method and apparatus for editing object, electronic device and storage medium
US20120150881A1 (en) Cloud-hosted multi-media application server
CN109756348B (en) Batch calling method and device
CN112146672A (en) Navigation method, navigation device and computer storage medium
CN111443903A (en) Software development file acquisition method and device, electronic equipment and storage medium
CN111274427A (en) Picture processing method and device and computer storage medium
US10068065B2 (en) Assignment of a machine-readable link to content as a payoff
CN109522286A (en) The treating method and apparatus of file system
US9170123B2 (en) Method and apparatus for generating information
US8560370B2 (en) Methods, systems, and computer products for adding map component to address book
CN101426020A (en) Method, system and apparatus for uploading map blog
US10296532B2 (en) Apparatus, method and computer program product for providing access to a content
CN111428613A (en) Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination