CN110888582B - Tag information processing method, device, storage medium and terminal - Google Patents


Info

Publication number
CN110888582B
CN110888582B (application CN201911164338.XA)
Authority
CN
China
Prior art keywords
picture
marking
information
task
browser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911164338.XA
Other languages
Chinese (zh)
Other versions
CN110888582A (en)
Inventor
Guo Ziliang (郭子亮)
Current Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN201911164338.XA
Publication of CN110888582A
Application granted
Publication of CN110888582B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F 3/04842 Selection of displayed objects or displayed text elements

Abstract

The application provides a tag information processing method, device, storage medium and terminal, wherein the method comprises the following steps: receiving a marking instruction input for a task picture loaded in a browser canvas container; acquiring target marking information corresponding to the marking instruction; and integrating the target marking information to generate a corresponding integration result. No dedicated application needs to be downloaded: the picture-marking function is achieved through simple operations on an ordinary browser page, and the marking information is integrated automatically into an integration result, which saves the manpower and time that picture-processing work requires and improves the user experience.

Description

Tag information processing method, device, storage medium and terminal
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for processing tag information, a storage medium, and a terminal.
Background
In order to meet business requirements, pictures generally need to be labeled, and most existing schemes perform picture labeling by means of a downloaded, dedicated desktop application. Specifically, an annotator loads the picture to be marked into the application window and then uses shapes or paths built into the application to draw on, move, or adjust the picture. After labeling is finished, the labeling information is exported, further edited and sorted, and uploaded to a server. This picture-processing work must be completed manually and consumes considerable time.
Disclosure of Invention
In order to solve the above problem, embodiments of the present application provide a method, an apparatus, a storage medium, and a terminal for processing tag information.
In a first aspect, an embodiment of the present application provides a method for processing tag information, including the following steps:
receiving a marking instruction input aiming at a task picture loaded in a browser canvas container;
acquiring target marking information corresponding to the marking instruction;
and integrating the target mark information to generate an integration result corresponding to the target mark information.
In a second aspect, an embodiment of the present application provides a tag information processing apparatus, including:
the instruction receiving unit is used for receiving a marking instruction input aiming at the task picture loaded in the browser canvas container;
the information acquisition unit is used for acquiring target marking information corresponding to the marking instruction;
and the result generating unit is used for integrating the target mark information and generating an integration result corresponding to the target mark information.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above-mentioned tag information processing method.
In a fourth aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above tag information processing method when executing the program.
According to the tag information processing method, device, storage medium and terminal described above, the terminal receives a marking instruction input for a task picture loaded in a browser canvas container, acquires the target marking information corresponding to that instruction, and then integrates the target marking information to generate a corresponding integration result. No dedicated application needs to be downloaded: the picture-marking function is achieved through simple operations on an ordinary browser page, and the marking information is integrated automatically into an integration result, saving the manpower and processing time that picture-marking work requires and improving the user experience.
Drawings
Fig. 1 is a schematic diagram of an application scenario to which tag information processing according to an embodiment of the present application may be applied;
fig. 2 is a schematic flowchart of a method for processing tag information according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another tag information processing method according to an embodiment of the present disclosure;
fig. 4a is a schematic view of an operation interface corresponding to a marking instruction according to an embodiment of the present disclosure;
fig. 4b is a schematic view of an operation interface corresponding to another marking instruction provided in the embodiment of the present application;
fig. 4c is a schematic view of an operation interface corresponding to yet another marking instruction provided in the embodiment of the present application;
fig. 5 is a schematic flowchart illustrating a method for implementing tag information processing according to an embodiment of the present application;
fig. 6 is a schematic interface diagram for starting the tag information processing method according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of another method for processing tag information according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a tag information processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is further described with reference to the following figures and examples.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not intended to indicate or imply relative importance. The following description provides embodiments of the present application, where different embodiments may be substituted or combined; the present application is therefore intended to include all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C, and another embodiment includes features B and D, this application should also be considered to include an embodiment containing any other possible combination of A, B, C, and D, even though that combination may not be explicitly recited below.
The following description provides examples, and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For example, the described methods may be performed in an order different than the order described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
Fig. 1 is a schematic diagram illustrating an exemplary system architecture to which a tag information processing method or processing apparatus according to an embodiment of the present application may be applied.
As shown in fig. 1, the system may include a plurality of terminals 101 and a server 102, connected via a network. The network may include various connection types, such as wired links, wireless communication links, or fiber-optic cables. It should be understood that the number of terminals, networks, and servers in fig. 1 is merely illustrative; there may be any number of each, as the implementation requires. For example, the server 102 may be a server cluster composed of a plurality of servers.
The server 102 sends the pictures and the picture-related marking instructions to the terminal 101 via the network. The terminal 101 receives a marking instruction input aiming at a task picture loaded in a browser canvas container; acquiring target marking information corresponding to the marking instruction; integrating the target mark information to generate an integration result corresponding to the target mark information; and uploads the integration result to the server 102.
Referring to fig. 2, fig. 2 is a schematic flowchart of a method for processing tag information according to an embodiment of the present application, where the method includes:
s201, receiving a marking instruction input aiming at the task picture loaded in the browser canvas container.
The browser may be any browser that can be installed on the terminal, such as Sogou Browser or 360 Browser. Any browser capable of carrying out the technical solution of the present application may be used; the application places no limit on the browser's type or version.
The browser canvas container may be any tool, such as the HTML canvas element, through which a browser can operate on pictures. The server side can send the picture to be operated on to the terminal, and the user can browse and operate on the picture using the browser canvas container without installing a corresponding client. The method provided by the embodiment of the present application can therefore reduce the operation and maintenance burden on the terminal and the storage space the terminal requires.
The marking instruction may originate from marking information and marking actions that a user inputs on the picture. For example, the marking instruction may add the label "girl" to the picture; it may add "girl" to the picture as a primary label and "character" as a secondary label; or it may check or select a specific area in the picture by means of a shape or path.
The marking instructions may also come from the server side. For example, multiple users may operate on the same picture online at the same time. The server acquires the marking instructions of different users for the pictures, and distributes the marking instructions to other users, so that each user can check the processing results of the other users for the pictures through the browser, and a plurality of users can process the marked pictures together.
In the case where a plurality of users perform the photo marking process simultaneously online, at least one administrator account may exist. The administrator account may distribute different picture-based marking instructions to different users. The user with the administrator authority can also distribute the marking instructions based on the same task picture to different users. Generally, users with ordinary rights can only view task pictures and corresponding marking instructions allocated to the users.
As one implementable mode, a user with administrator privileges can distribute task pictures and their corresponding marking instructions to users with ordinary privileges in the form of links. Since the picture-processing job may involve many different users, the links should preferably be compatible with different browser types and versions.
And S202, acquiring target marking information corresponding to the marking instruction.
The target marking information is information acquired based on the marking instruction and the processing picture. The target mark information may include position information, mark content information, and the like, wherein the position information may be represented by a series of coordinate points.
For example, if the marking instruction is to add a label of "girl" to the picture, the target marking information may be the label content "girl". The marking instruction may further be that a "girl" is added to the picture as a primary label, and a "character" is a secondary label, so that the target marking information may be a content "girl" corresponding to the primary label and a content "character" corresponding to the secondary label. The marking instruction may be to check or select a specific area in the picture by a shape or a path, and the target marking information may be a series of coordinate points determined according to the shape or the path.
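The three kinds of marking instructions above yield different target marking information. As a non-authoritative sketch (the patent does not specify a data format, so every field name below is an illustrative assumption), the target marking information might be represented as plain objects:

```javascript
// Illustrative shapes for target marking information (hypothetical field
// names; the patent does not fix a concrete data format).

// Type 1: a single label added to the whole picture.
const labelOnly = { pictureId: 1, label: 'girl' };

// Type 2: a primary and a secondary label.
const twoLevelLabel = { pictureId: 1, primaryLabel: 'girl', secondaryLabel: 'character' };

// Type 3: a region selected by a shape or path, represented as a
// series of coordinate points.
const regionMark = {
  pictureId: 1,
  shape: 'rectangle',
  points: [[800, 405], [800, 611], [915, 405], [915, 611]],
};

// Small helper: extract the most specific content of a mark,
// whichever of the three forms it takes.
function markContent(mark) {
  return mark.label ?? mark.primaryLabel ?? mark.shape;
}
```

A real implementation would of course pick one schema and validate it; the point here is only that all three instruction types reduce to label content, coordinate points, or both.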
S203, integrating the target mark information to generate an integration result corresponding to the target mark information.
Integration processing is processing that makes it more convenient for a user or a system to retrieve or use a picture. It may include classifying the picture based on the target marking information: specifically, the system may extract a keyword from the target marking information, calculate the similarity between that keyword and the keywords of existing picture categories, and take the category with the largest similarity value as the picture's category. Integration processing may further include establishing a mapping relationship between mark positions and mark content, generating a mapping relationship table, and determining that table to be the integration result corresponding to the target marking information.
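The classification step described above can be sketched minimally as follows, assuming similarity is measured as keyword overlap (Jaccard index); the patent only says "similarity" without fixing a metric, and every name below is illustrative:

```javascript
// Jaccard overlap between two keyword lists (an assumed metric).
function jaccard(a, b) {
  const setA = new Set(a);
  const setB = new Set(b);
  const inter = [...setA].filter((x) => setB.has(x)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : inter / union;
}

// Pick the existing picture category whose keywords are most similar
// to the keywords extracted from the target marking information.
function classify(markKeywords, categories) {
  let best = null;
  let bestScore = -1;
  for (const [name, keywords] of Object.entries(categories)) {
    const score = jaccard(markKeywords, keywords);
    if (score > bestScore) {
      bestScore = score;
      best = name;
    }
  }
  return best;
}

// Hypothetical category table, echoing the flamingo example below.
const categories = {
  birds: ['flamingo', 'bird', 'feather'],
  landscape: ['sea', 'mountain', 'sky'],
};
```

Replacing `jaccard` with another similarity measure (for example, cosine similarity over keyword embeddings) changes only one function; the category-selection loop stays the same.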
According to the tag information processing method provided by the embodiment of the application, after a user inputs a tag instruction in a picture, a system can automatically acquire target tag information corresponding to the tag instruction and integrate the target tag information to generate an integration result. Therefore, the technical scheme of the application can solve the problem that the image marking information processing method in the prior art needs to consume more manpower and time.
Referring to fig. 3, fig. 3 is a schematic flowchart of another tag information processing method provided in an embodiment of the present application, where the method includes:
s301, judging whether the task picture is marked before the current task.
A flag bit corresponding to the task picture may be stored locally on the user's side, or a flag bit indicating whether the task picture has been marked may be added to the server-side database; this flag bit then determines whether the task picture was marked before the current task. Whether the task picture has been marked may also be determined in other ways; the present application places no limit on the implementation.
S302, if the task picture is marked before the current task, historical marking information corresponding to the task picture is obtained from the server.
The historical marking information is marking information of the task picture before the current task. The marking information may include: the task picture processing method comprises the steps of marking position information of a task picture, marking content information of the task picture, keywords corresponding to the task picture, category information corresponding to the task picture and the like.
It should be noted that the terminal must complete two steps: acquiring the task picture, and acquiring the historical marking information corresponding to it. The embodiment of the present application places no restriction on the order of these two steps, as long as the task picture displaying the historical marking information can be loaded in the browser canvas.
It should be noted that if the task picture was labeled multiple times before the current task, the historical marking information includes all of the marking information from those labelings.
S303, loading the task picture in a browser canvas, and displaying the historical marking information on the task picture.
If the user completed only part of the picture-processing work, or finds a problem in previously completed marking information, the marking processing method provided by this embodiment lets the user continue the picture-marking work from where the last picture-processing session left off.
S304, if the task picture is not marked before the current task, the task picture is obtained from the server, and the task picture is loaded in the canvas of the browser.
S305, obtaining the mark type from the server.
The mark types may be assigned by the administrator account for different task pictures and users. Mark types may include: picture classification, box-selecting objects, classifying objects in the picture, and the like.
S306, receiving a marking instruction input aiming at the task picture loaded in the browser canvas container.
S307, acquiring target marking information corresponding to the marking instruction, wherein the target marking information corresponds to the marking types one to one.
The system of the embodiments of the present application may provide a variety of mark types, and the user adds corresponding marks to the task picture according to the mark type. Fig. 4a is a schematic view of an operation interface corresponding to a marking instruction according to an embodiment of the present disclosure. As shown in fig. 4a, the current mark type is picture classification. Based on this mark type, the user adds the target marking information "flamingo" to the task picture. The system then performs a sorting operation based on this information, for example classifying the picture into the "birds" image category.
Fig. 4b is a schematic view of an operation interface corresponding to another marking instruction provided in the embodiment of the present application. As shown in fig. 4b, the current mark type is box-selecting objects. Based on this mark type, the user box-selects two specific regions in the picture, corresponding to the two flamingos. The system can subsequently perform image-analysis processing, such as edge recognition and cluster analysis, on the box-selected regions.
Fig. 4c is a schematic view of an operation interface corresponding to yet another marking instruction provided in the embodiment of the present application. As shown in fig. 4c, the current mark type is classifying objects in the picture. Based on this mark type, the user box-selects two specific regions corresponding to the two flamingos and adds the target marking information "flamingo" to them. The system can further analyze and organize the picture according to this target marking information.
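Since the target marking information corresponds one-to-one with the mark types, a terminal could validate incoming marks per type. The sketch below is a hypothetical illustration of that correspondence (the type and field names are assumptions, not from the patent):

```javascript
// Expected content of the target marking information for each mark type
// (type names follow the description above; field names are assumed).
const MARK_TYPES = {
  classifyPicture: (m) => typeof m.label === 'string',   // cf. fig. 4a
  boxSelect: (m) => Array.isArray(m.points),             // cf. fig. 4b
  classifyObjects: (m) =>                                // cf. fig. 4c
    Array.isArray(m.points) && typeof m.label === 'string',
};

// Check whether a mark matches the mark type assigned to the task.
function isValidMark(markType, mark) {
  const check = MARK_TYPES[markType];
  return Boolean(check && check(mark));
}
```

Such a table also makes it easy for an administrator-configured task to switch mark types without changing the collection code.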
S308, integrating the target mark information to generate an integration result corresponding to the target mark information.
According to the method provided by the embodiment of the application, multiple mark types can be set in the system, and the user adds marks to the pictures according to the mark types. On one hand, the user can conveniently and flexibly process the pictures; on the other hand, a system administrator can allocate different pictures and mark types to different users according to actual needs, so that multiple users can cooperate to complete picture marking work.
Referring to fig. 5, fig. 5 is a schematic flowchart of a method for processing tag information according to an embodiment of the present application, where the method includes:
s501, receiving a marking instruction input aiming at the task picture loaded in the browser canvas container.
Fig. 6 is a schematic interface diagram for starting the tag information processing method according to an embodiment of the present application. As shown in fig. 6, the user may launch the tag-information-processing application of this embodiment by double-clicking the "browser" icon on the desktop and opening, in the browser, the link corresponding to the picture-marking application. The browser page loads the corresponding task picture by calling canvas through JavaScript, for the user to operate on.
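A minimal sketch of the canvas loading just described, under the assumption of a hypothetical element id and image URL (`loadTaskPicture` is browser-only code and is only defined here; `fitToCanvas` is a plain helper that keeps the picture's aspect ratio):

```javascript
// Compute draw dimensions that fit an image inside a canvas while
// preserving its aspect ratio (pure helper; behaviour is an assumption,
// the patent does not describe scaling).
function fitToCanvas(imgW, imgH, canvasW, canvasH) {
  const scale = Math.min(canvasW / imgW, canvasH / imgH);
  return { width: Math.round(imgW * scale), height: Math.round(imgH * scale) };
}

// Load a task picture into a browser canvas container
// (element id and URL are hypothetical; runs only in a browser).
function loadTaskPicture(url) {
  const canvas = document.getElementById('task-canvas');
  const ctx = canvas.getContext('2d');
  const img = new Image();
  img.onload = () => {
    const { width, height } = fitToCanvas(img.width, img.height, canvas.width, canvas.height);
    ctx.drawImage(img, 0, 0, width, height);
  };
  img.src = url;
}
```

In the flow of fig. 6, `loadTaskPicture` would be called once the browser page has fetched the task picture URL from the server.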
And S502, acquiring target marking information corresponding to the marking instruction.
The target marking information is information acquired on the basis of the marking instruction and the processed picture. It may include position information, mark content information, and the like. The position information may be represented by a series of coordinate points, while the mark content is text the user adds to the picture, generally covering information such as the picture's category, geographic location, and subject matter.
S503, establishing a mapping relation between the mark position and the mark content, and generating a mapping relation table.
The marking instruction may check or select a specific area in the picture by means of a shape or path, and the target marking information may be a series of coordinate points determined from that shape or path. For example, the mapping relationship between mark positions and mark content can be as shown in Table 1. In the picture with picture ID 1, the points with coordinates (1600, 100), (1600, 200), (1600, 300), (1600, 400), and (1600, 500) form a path, and the mark content corresponding to the specific area selected by the path is "sea". In the same picture, the points with coordinates (800, 405), (800, 611), (915, 405), and (915, 611) form a rectangle, and the mark content corresponding to the area enclosed by the rectangle is "sea".
TABLE 1

Picture ID | Mark position | Mark content
1 | path: (1600, 100), (1600, 200), (1600, 300), (1600, 400), (1600, 500) | sea
1 | rectangle: (800, 405), (800, 611), (915, 405), (915, 611) | sea
S504, determining the mapping relation table as an integration result corresponding to the target mark information.
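Steps S503 and S504 amount to building the rows of Table 1 from the collected marks. A minimal sketch, with an assumed schema (the patent does not fix field names):

```javascript
// Build the mapping-relation table: one row per mark, linking the mark
// position (a series of coordinate points) to the mark content.
function buildMappingTable(pictureId, marks) {
  return marks.map((m) => ({
    pictureId,
    position: m.points,   // coordinates of the shape or path
    content: m.content,   // text the user attached to the region
  }));
}

// The two example marks from Table 1.
const marks = [
  { points: [[1600, 100], [1600, 200], [1600, 300], [1600, 400], [1600, 500]], content: 'sea' },
  { points: [[800, 405], [800, 611], [915, 405], [915, 611]], content: 'sea' },
];
const table = buildMappingTable(1, marks);
```

The resulting array is the mapping relationship table, which is then determined to be the integration result.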
And S505, uploading the integration result to the server.
Compared with manually uploading the integration result to a server, this method uploads the integration result automatically, which improves system efficiency and the user's experience of the system.
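Step S505's automatic upload might be sketched as below; the endpoint path and payload field names are assumptions, with the serialisation separated into its own function so it can be shown independently of the network call:

```javascript
// Serialise the integration result for upload (field names assumed).
function buildUploadBody(taskId, table) {
  return JSON.stringify({ taskId, integrationResult: table });
}

// Upload the integration result to the server automatically once the
// mapping table is ready (standard browser fetch; '/api/marks' is a
// hypothetical endpoint).
async function uploadResult(taskId, table) {
  const res = await fetch('/api/marks', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildUploadBody(taskId, table),
  });
  return res.ok;
}
```

In the flow of fig. 5, `uploadResult` would run right after S504, before the next task picture is fetched.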
S506, obtaining a next picture of the task picture, determining the next picture as the task picture, and executing the step of receiving a marking instruction input aiming at the task picture loaded in the canvas container of the browser.
And S507, when the next picture does not exist, quitting the browser canvas container.
According to the method provided by the embodiment of the application, the mapping relation table is automatically generated based on the marking position and the marking content, and the integration result corresponding to the mapping relation table is automatically uploaded to the server. Therefore, compared with the scheme that the user needs to manually integrate the marking information subsequently, the technical scheme of the embodiment of the application can save the manpower required by the image processing work and save the time of the user.
The inventor finds that existing schemes for marking pictures through desktop software suffer several inconveniences and defects. First, the labeling types are limited: often only a shape- or path-drawing function is available, which cannot satisfy diversified business requirements. Second, the marked content is raw data, and the operator must further perform processing operations such as data merging and cleaning, which increases the workload and its complexity. In addition, desktop software often lacks a function for automatically uploading to the server. Finally, the client software provides no secondary-labeling or audit/acceptance flow for pre-processed or processed content.
In the solution described in the embodiment of the present invention, the picture and the labeling result are fetched from the server side to the local browser, the picture and the labeling content are loaded into a canvas container using JavaScript, and the user classifies the whole picture, or draws preset shapes or custom paths and labels the drawn shapes. The program performs a first round of processing on the labeled result content according to business requirements, and the processed labeling result can be uploaded automatically to the server side for storage. This scheme remedies many of the defects of desktop picture-marking software, expands the range of services, and greatly improves production efficiency.
The scheme is applied in the picture-data acquisition stage in the field of artificial intelligence; it is a front-end module covering the whole picture-labeling process. Annotators label, in a browser page, pictures acquired from a server, and then save the results back to the server, realizing the picture-labeling business. The annotator judges or identifies the whole picture, or specific objects in it, as required: classifying the whole picture, or drawing marks such as rectangles, circles, polygons, points, and irregular paths on specific content in the picture and annotating them. The labeling module pre-processes the labeled result and automatically sends it to the server for storage. The client-side picture-labeling module also restores previously saved labeling content, which is used in processing logic such as pre-labeling and auditing.
Fig. 7 is a flowchart of a method for implementing tag information processing according to an embodiment of the present application. As shown in fig. 7, the method includes:
step 1, a marking person newly builds a picture marking task through a marking platform, sets marking types and other task parameters, and uploads a picture file.
The annotating personnel in step 1 are typically operators with administrator privileges. The operator distributes work for other operators by newly establishing a picture marking task, setting a marking type and the like.
Step 2: the annotator enters a labeling page, and the page acquires the labeling task information and pictures from the server.
The annotators in steps 2 to 9 are typically operators with ordinary privileges; through steps 2 to 9 they complete the labeling tasks distributed in step 1 by the operator with administrator privileges.
And 3, loading the picture into a canvas container by the browser, and if the labeling task is secondary labeling (such as manual examination of algorithm pre-labeled content or examination and acceptance flow of labeled results), recovering the labeled content corresponding to the picture by the canvas.
And 4, if the labeling type is to classify the whole graph, the labeling personnel selects the label or inputs the custom labeling content.
And 5, if the labeling task is to select specific content in the picture frame, the labeling personnel uses a mouse to select a built-in shape (such as a rectangle, a circle, a point, a polygon and the like) or a custom path to draw, move or adjust the size of the shape or the path in the picture.
Step 6: if the content selected in step 5 needs to be labeled, the annotator selects a label or inputs custom annotation content for the corresponding shape or path, either in the picture or in the page list.
By repeating steps 5 and 6, a picture can be annotated multiple times.
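The frame-selection of steps 5 and 6 amounts to converting a mouse drag into a normalized shape record and then attaching a label to it. A minimal sketch follows; the function and field names are illustrative assumptions, not taken from the patent.

```javascript
// Hypothetical sketch: turn a mouse drag (press point to release point)
// into a normalized rectangle mark, as in step 5. Normalizing lets the
// user drag in any direction and still get positive width/height.
function rectFromDrag(x0, y0, x1, y1) {
  return {
    kind: "rect",
    x: Math.min(x0, x1),
    y: Math.min(y0, y1),
    w: Math.abs(x1 - x0),
    h: Math.abs(y1 - y0),
    label: null, // filled in later by the annotator (step 6)
  };
}

// Step 6: attach a selected or custom label to an existing mark.
// Returns a new record rather than mutating, so undo/redo stays simple.
function attachLabel(mark, label) {
  return { ...mark, label };
}
```

Repeating the pair of calls for each new drag produces the list of marks that a picture accumulates over multiple annotation passes.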
Step 7: the annotation module sorts and organizes all the original content annotated in steps 4 to 6 according to the business requirements and system settings.
Step 7 completes the classification processing of the picture and organizes the mapping relationship between annotation positions and annotation content, which facilitates subsequent work such as picture analysis and collation.
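The integration of step 7 (building the mapping between annotation positions and annotation content, alongside whole-picture classification labels) can be sketched as follows. The field names and record layout are assumptions for illustration only.

```javascript
// Hypothetical sketch of the step-7 integration: combine whole-picture
// classification labels with a mapping table that relates each mark's
// position to its annotation content, ready for upload.
function integrateMarks(pictureId, wholeLabels, shapeMarks) {
  return {
    pictureId,
    classification: wholeLabels, // labels applied to the whole picture
    mappings: shapeMarks.map((m, i) => ({
      id: i,            // position in the table
      position: m.position, // e.g. { kind: "rect", x, y, w, h }
      content: m.label,     // label/content for that region
    })),
  };
}
```

The resulting object is the "integration result" that step 8 uploads; keeping position and content in one table is what makes later picture analysis and auditing straightforward.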
Step 8: the preliminarily preprocessed picture annotation result of step 7 is uploaded to the server.
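The upload in step 8 can be sketched as serializing the integrated result into a JSON request. The endpoint path, headers, and field names below are assumptions for illustration; the patent does not specify a transport format.

```javascript
// Hypothetical sketch: build the upload request for the integrated
// annotation result. The endpoint path is assumed, not from the patent.
function buildUploadRequest(result) {
  return {
    url: `/api/annotation/${result.pictureId}`, // assumed endpoint
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(result),
    },
  };
}

// In the browser the request would then be sent with fetch:
//   const { url, options } = buildUploadRequest(integrated);
//   await fetch(url, options);
```

Separating request construction from sending keeps the serialization testable and lets the module retry or queue uploads without touching the network code.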
Step 9: the annotator switches to the next picture to annotate, via the previous-picture control, the next-picture control, or the picture list.
Step 9 allows the operator to conveniently select the picture to work on, and ensures that every picture distributed to the operator is processed and uploaded to the server without omission.
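The picture-switching logic of step 9, together with exiting when no next picture exists, can be sketched as a small navigation function. Names are illustrative assumptions.

```javascript
// Hypothetical sketch of step 9: pick the previous or next picture in
// the task list, or signal that the task is finished when no picture
// remains in that direction (at which point the canvas container can
// be closed, as in the embodiment's exit step).
function nextPicture(pictures, currentIndex, direction /* +1 or -1 */) {
  const next = currentIndex + direction;
  if (next < 0 || next >= pictures.length) {
    return { done: true, picture: null }; // no next picture: exit
  }
  return { done: false, picture: pictures[next], index: next };
}
```

Driving the annotation loop off this function guarantees the "no omission" property: the loop only terminates when the index walks off the end of the distributed picture list.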
Figs. 2 to 7 above describe in detail the tag information processing method according to the embodiments of the present application.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a tag information processing apparatus according to an embodiment of the present application, and as shown in fig. 8, the tag information processing apparatus includes:
an instruction receiving unit 801, configured to receive a markup instruction input for a task picture loaded in a browser canvas container;
an information obtaining unit 802, configured to obtain target marking information corresponding to the marking instruction;
a result generating unit 803, configured to perform integration processing on the target marker information, and generate an integration result corresponding to the target marker information.
Optionally, the apparatus further comprises:
the picture loading unit 804 is configured to obtain a task picture from a server, and load the task picture in a browser canvas.
Optionally, the apparatus further comprises:
a judging unit 805, configured to judge whether the task picture is marked before the current task;
a history information obtaining unit 806, configured to obtain history flag information corresponding to the task picture from the server if the task picture is marked before the current task;
the picture loading unit 804 is further configured to load the task picture in a browser canvas and display the history flag information on the task picture.
Optionally, the apparatus further comprises:
a type obtaining unit 807, configured to obtain a mark type from the server;
the information obtaining unit 802 is specifically configured to obtain target mark information corresponding to the mark instruction, where the target mark information corresponds to the mark types one to one.
Optionally, the result generating unit 803 is specifically configured to:
establishing a mapping relation between the mark position and the mark content to generate a mapping relation table;
and determining the mapping relation table as an integration result corresponding to the target mark information.
Optionally, the apparatus further includes a result uploading unit 808, configured to upload the integrated result to the server.
Optionally, the apparatus further comprises:
a picture obtaining unit 809, configured to obtain a next picture of the task picture, determine the next picture as the task picture, and perform the step of receiving a marking instruction input for the task picture loaded in the browser canvas container;
a program exit unit 810 for exiting the browser canvas container when it is determined that there is no next picture.
It is clear to a person skilled in the art that the solution according to the embodiments of the present application can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, an FPGA (Field-Programmable Gate Array), an IC (Integrated Circuit), or the like.
Each processing unit and/or module in the embodiments of the present application may be implemented by an analog circuit that implements the functions described in the embodiments of the present application, or may be implemented by software that executes the functions described in the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the above-mentioned tag information processing method. The computer-readable storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
Referring to fig. 9, a schematic structural diagram of a terminal according to an embodiment of the present application is shown, where the terminal may be used to implement the tag information processing method provided in the foregoing embodiment. Specifically, the method comprises the following steps:
the memory 910 may be used to store software programs and modules, and the processor 920 may execute various functional applications and data processing by operating the software programs and modules stored in the memory 910. The memory 910 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc. Further, the memory 910 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 910 may also include a memory controller to provide the processor 920 and the input unit 930 with access to the memory 910.
The input unit 930 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 930 may include a touch-sensitive surface 931 (e.g., a touch screen, a touch pad, or a touch frame). A touch-sensitive surface 931, also referred to as a touch display screen or touch pad, may collect touch operations by a user thereon or nearby.
The display unit 940 may be used to display information input by or provided to the user and various graphic user interfaces of the terminal, which may be configured by graphics, text, icons, video, and any combination thereof. The Display unit 940 may include a Display panel 941, and optionally, the Display panel 941 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
The processor 920 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 910 and calling data stored in the memory 910, thereby integrally monitoring the terminal. Optionally, processor 920 may include one or more processing cores; processor 920 may integrate an application processor that handles operating system, user interface, and applications, among others, and a modem processor that handles wireless communications, among others. It will be appreciated that the modem processor described above may not be integrated into processor 920.
Specifically, in this embodiment, the display unit of the terminal is a touch screen display, the terminal further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include steps for implementing the tag information processing method.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
All functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. A method of tag information processing, the method comprising:
receiving a marking instruction input aiming at a task picture loaded in a browser canvas container, wherein the marking instruction is from marking information and marking action input by a user on the task picture;
obtaining a mark type from a server, wherein the mark type comprises: at least one of picture classification, framing objects and object classification in the picture;
acquiring target marking information corresponding to the marking instruction, wherein the target marking information corresponds to the marking types one to one;
and integrating the target mark information to generate an integration result corresponding to the target mark information, wherein the integration processing comprises classifying pictures based on the target mark information.
2. The method of claim 1, wherein prior to receiving the marking instruction input for the task picture loaded in the browser canvas container, the method further comprises:
and acquiring a task picture from a server, and loading the task picture in a canvas of a browser.
3. The method of claim 2, wherein prior to loading the task picture in the browser canvas, the method further comprises:
judging whether the task picture is marked before the current task;
if the task picture is marked before the current task, acquiring historical marking information corresponding to the task picture from the server;
the loading of the task picture in the browser canvas comprises:
and loading the task picture in a browser canvas, and displaying the historical marking information on the task picture.
4. The method according to claim 1, wherein the target mark information includes a mark position and a mark content, and the integrating the target mark information to generate an integration result corresponding to the target mark information includes:
establishing a mapping relation between the mark position and the mark content to generate a mapping relation table;
and determining the mapping relation table as an integration result corresponding to the target mark information.
5. The method according to claim 1, wherein after generating the integration result corresponding to the target mark information, the method further comprises:
and uploading the integration result to the server.
6. The method of claim 5, wherein after uploading the integration result to the server, the method further comprises:
acquiring a next picture of the task picture, determining the next picture as the task picture, and executing the step of receiving a marking instruction input aiming at the task picture loaded in the canvas container of the browser;
and when determining that the next picture does not exist, exiting the browser canvas container.
7. A tag information processing apparatus, characterized in that the apparatus comprises:
the instruction receiving unit is used for receiving a marking instruction input aiming at the task picture loaded in the browser canvas container, and the marking instruction is from marking information and marking action input by a user on the task picture;
a type obtaining unit, configured to obtain a token type from a server, where the token type includes: at least one of picture classification, framing objects and object classification in the picture;
the information acquisition unit is specifically configured to acquire target mark information corresponding to the mark instruction, where the target mark information corresponds to the mark types one to one;
and the result generating unit is used for integrating the target mark information and generating an integration result corresponding to the target mark information, wherein the integration processing comprises classifying pictures based on the target mark information.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
9. A terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-6 are implemented when the program is executed by the processor.
CN201911164338.XA 2019-11-25 2019-11-25 Tag information processing method, device, storage medium and terminal Active CN110888582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911164338.XA CN110888582B (en) 2019-11-25 2019-11-25 Tag information processing method, device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN110888582A CN110888582A (en) 2020-03-17
CN110888582B true CN110888582B (en) 2022-01-25

Family

ID=69748567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911164338.XA Active CN110888582B (en) 2019-11-25 2019-11-25 Tag information processing method, device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN110888582B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367445B (en) * 2020-03-31 2021-07-09 中国建设银行股份有限公司 Image annotation method and device
CN112084755A (en) * 2020-07-31 2020-12-15 武汉光庭信息技术股份有限公司 Method and system for realizing picture marking system based on WEB
CN112598073A (en) * 2020-12-28 2021-04-02 南方电网深圳数字电网研究院有限公司 Power grid equipment image labeling method, electronic equipment and storage medium
CN115037952A (en) * 2021-03-05 2022-09-09 上海哔哩哔哩科技有限公司 Marking method, device and system based on live broadcast
CN113254221A (en) * 2021-07-09 2021-08-13 武汉精创电子技术有限公司 Task execution system and method for defect labeling
CN114359367B (en) * 2022-03-15 2022-06-28 深圳市华付信息技术有限公司 Data labeling method and device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999535A (en) * 2011-09-19 2013-03-27 阿里巴巴集团控股有限公司 Information display method, information acquisition method, client terminal and server
CN103092833A (en) * 2011-10-27 2013-05-08 腾讯科技(深圳)有限公司 Method, apparatus and mobile device for viewing pictures in mobile browser
CN103425690A (en) * 2012-05-22 2013-12-04 湖南家工场网络技术有限公司 Picture information labeling and displaying method based on cascading style sheets
CN107450818A (en) * 2017-08-18 2017-12-08 深圳易嘉恩科技有限公司 Photo Browser based on primary JavaScript and html
CN108897826A (en) * 2018-06-22 2018-11-27 上海哔哩哔哩科技有限公司 Banner picture rapid generation, system and storage medium
CN110377777A (en) * 2019-06-29 2019-10-25 苏州浪潮智能科技有限公司 A kind of multiple mask method of picture based on deep learning and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306175A (en) * 2011-08-25 2012-01-04 北京商纳科技有限公司 Personal knowledge management method and device
CN102799371B (en) * 2012-06-29 2014-11-26 北京奇虎科技有限公司 Extended data input device and method
CN103714115B (en) * 2013-10-29 2018-03-30 北京奇虎科技有限公司 The loading method and device of a kind of web page contents
CN109753582A (en) * 2018-12-27 2019-05-14 西北工业大学 The method of magnanimity photoelectricity ship images quick-searching based on Web and database

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
How to edit pictures in a browser web page; yqih45; https://jingyan.baidu.com/article/20095761569ae78b0721b4a2.html; 2019-07-24; pp. 1-4 *

Also Published As

Publication number Publication date
CN110888582A (en) 2020-03-17

Similar Documents

Publication Publication Date Title
CN110888582B (en) Tag information processing method, device, storage medium and terminal
CN111310934B (en) Model generation method and device, electronic equipment and storage medium
CN106775632B (en) High-performance geographic information processing method and system with flexibly-expandable business process
KR102399425B1 (en) Data labelling pre-processing, distributing and checking system
CN110163268A (en) A kind of image processing method, device and server, storage medium
US20200118305A1 (en) Automatic line drawing coloring program, automatic line drawing coloring apparatus, and graphical user interface program
CN111913808A (en) Task allocation method, device, equipment and storage medium
US20220382273A1 (en) Data collection system and recording medium
CN112785714A (en) Point cloud instance labeling method and device, electronic equipment and medium
CN112925520A (en) Method and device for building visual page and computer equipment
CN111602157B (en) Supplier Supply Chain Risk Analysis Method
CN116383693A (en) Data issuing method based on data security automatic classification grading result
CN112433650B (en) Project management method, device, equipment and storage medium
CN113626024A (en) Low code development method and device combining RPA and AI and computing equipment
US7886002B2 (en) Application collaboration system, collaboration method and collaboration program
CN116578497A (en) Automatic interface testing method, system, computer equipment and storage medium
JP2017111500A (en) Character recognizing apparatus, and program
CN113377346B (en) Integrated environment building method and device, electronic equipment and storage medium
CN115631374A (en) Control operation method, control detection model training method, device and equipment
CN111768007B (en) Method and device for mining data
CN108427557A (en) A kind of control layout display control method, device and computer readable storage medium
CN111090370A (en) Picture management method and device and computer readable storage medium
US11507728B2 (en) Click to document
KR102555733B1 (en) Object management for improving machine learning performance, control method thereof
US20220254141A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant