CN110308848B - Label interaction method and device and computer storage medium - Google Patents


Publication number
CN110308848B
CN110308848B (application number CN201910517714.2A)
Authority
CN
China
Prior art keywords
label
tag
input command
control
target
Prior art date
Legal status
Active
Application number
CN201910517714.2A
Other languages
Chinese (zh)
Other versions
CN110308848A (en)
Inventor
徐冠杰
王博
李南浩
王文力
徐小红
肖嘉熙
李均贺
黄仝宇
汪刚
宋一兵
侯玉清
刘双广
Current Assignee
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd
Priority to CN201910517714.2A
Publication of CN110308848A
Application granted
Publication of CN110308848B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/74 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a tag interaction method, a tag interaction apparatus, and a computer storage medium. The method includes: receiving a first input command and generating a tag management interface in response to the first input command; acquiring tag data and displaying the tag data on the tag management interface according to a preset rule; and receiving a second input command, controlling the tag in response to the second input command, and displaying the control result in the video image. The tag interaction method can improve the efficiency and user experience of tag operations.

Description

Label interaction method and device and computer storage medium
Technical Field
The present invention relates to the field of video surveillance technology, and in particular, to a tag interaction method, a tag interaction apparatus, and a computer storage medium.
Background
Currently, most image capture devices superimpose tags that describe the real environment when presenting video images, so as to monitor the scene. Generally, for maintenance and adjustment, a user performs a series of operations and controls on the tags, and these are carried out one tag at a time. In particular, when tags are maintained or modified in an information-dense scene, they generally must be operated one by one, and during this process other tools are needed to record which tags have and have not been operated. The interaction mode of tag control is therefore not simple enough, and both the user experience and the execution efficiency are low.
Disclosure of Invention
In view of this, the present invention provides a tag interaction method, a tag interaction apparatus, and a computer storage medium, so as to improve the efficiency and user experience of tag operations.
In order to solve the above technical problem, in one aspect, the present invention provides a tag interaction method, where the method includes: receiving a first input command, and generating a tag management interface in response to the first input command; acquiring tag data, and displaying the tag data on the tag management interface according to a preset rule; and receiving a second input command, controlling the tag in response to the second input command, and displaying the control result in the video image.
According to some embodiments of the present invention, the generating a tag management interface specifically includes: and acquiring a label set of the image acquisition equipment, and generating the label management interface, wherein the label management interface comprises the label set of the image acquisition equipment.
According to some embodiments of the present invention, the first input command is triggered by the user clicking a specific area, by the user entering an instruction with a preset key or key combination on a keyboard, or by the user using another preset input device other than a conventional keyboard and mouse, including but not limited to a joystick and a touch screen.
According to some embodiments of the invention, the acquired set of tags of the image capture device is from tag data stored in a storage device.
According to some embodiments of the invention, the preset rule includes a grouping rule, where the grouping rule is the police tag type, the social facility tag type, the tag shape, or the capture device to which the tag belongs.
According to some embodiments of the present invention, the step of obtaining the tag data and displaying the tag data on the tag management interface according to the preset rule includes receiving a third input command, and changing a display style of the tag data on the tag management interface according to the third input command; wherein the third input command comprises: grouping commands, wherein the label data are grouped and displayed according to the grouping commands; or ordering commands, ordering the tag data according to the ordering commands; or the search command is used for searching the label data according to the name, the characters or the serial number.
According to some embodiments of the invention, the control action on the tag includes: when the target of the second input command is a tag and the image capture device is a dome camera, rotating the dome camera to which the tag belongs to an angle at which the tag can be presented; or when the target of the second input command is a tag, causing the tag on the video image to present a preset selected display state; or when the target of the second input command is a tag group or a tag, hiding part of the tags on the video image; or when the target of the second input command is a tag group or a tag, causing part of the tags hidden on the video image to be presented on the image; or when the target of the second input command is a tag group or a tag, removing the tag from the storage device and the video image; or when the target of the second input command is a tag group or a tag, calibrating the tag position according to a preset movement pattern; or when the target of the second input command is a tag group or a tag, extracting the focused attention content of all target tags and presenting it in a preset tag management interface; or when the target of the second input command is a tag group or a tag, opening a secondary control interface for control.
According to some embodiments of the invention, the preset movement pattern includes: clockwise or counterclockwise rotation in the reference frame formed by the dome camera and the ground plane; translation with the video picture as the reference frame; and translation with true north as the reference frame.
According to some embodiments of the invention, the focused attention content of the target tag includes: the content of the tag itself; the live video picture captured by the secondary image capture device indicated by the tag; abnormal events detected in the video picture captured by the secondary image capture device indicated by the tag; and the recorded video captured by the secondary image capture device indicated by the tag.
According to some embodiments of the present invention, the preset tag management interface for extracting and presenting the focused attention content of all target tags includes: a tag management interface presenting the abnormal event list; a picture-in-picture interface presenting the live video picture or recorded video of the secondary image capture device indicated by the tag; and a multi-video playback interface presenting a plurality of picture-in-picture interfaces. The secondary control interface is a control interface that extends the control behavior; its purpose is to control the selected tag, and it provides at least one trigger for controlling the tag.
In a second aspect, an embodiment of the present invention provides a tag interaction apparatus, including: a tag management interface generation module, which can receive a first input command and generate a tag management interface in response to the first input command; a tag data acquisition module, which can acquire tag data and display the tag data on the tag management interface according to a preset rule; and a tag control module, which can receive a second input command, control the tag in response to the second input command, and display the control result in the video image.
In a third aspect, an embodiment of the present invention provides a computer storage medium including one or more computer instructions that, when executed, implement any of the above-described methods.
The technical scheme of the invention at least has one of the following beneficial effects:
(1) the interaction mode of label control is simple, the use experience and the execution efficiency are high, and the efficiency and the use experience of label operation can be improved.
(2) When managing and maintaining the tags on video frame images, the method helps the user find and manage tags more quickly, and when the image capture device shifts due to external factors, the tags can be recalibrated more quickly.
(3) In terms of display, batch show/hide logic can provide filtered display of the tags, offering better interaction and display effects.
Drawings
Fig. 1 is a schematic flowchart of a tag interaction method according to an embodiment of the present invention;
fig. 2 is a schematic view of a display interface of a tag management interface of a tag interaction method according to an embodiment of the present invention;
fig. 3 is a schematic view of a display interface of a tag management interface of a tag interaction method according to another embodiment of the present invention;
fig. 4 is a schematic view of a display interface of a tag management interface of a tag interaction method according to another embodiment of the present invention;
FIG. 5 is a schematic view of a tag interaction apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the invention.
Reference numerals:
a tag interaction method 100;
a tag interaction apparatus 200;
a tag management interface generation module 210; a tag data acquisition module 220; a tag control module 230;
an electronic device 300;
a memory 310; an operating system 311; an application 312;
a processor 320; a network interface 330; an input device 340; a hard disk 350; a display device 360.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention. Furthermore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
A tag interaction method 100 according to an embodiment of the present invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, according to an embodiment of the present invention, a tag interaction method 100 includes: receiving a first input command, and generating a tag management interface in response to the first input command; acquiring tag data, and displaying the tag data on the tag management interface according to a preset rule; and receiving a second input command, controlling the tag in response to the second input command, and displaying the control result in the video image.
In other words, according to the tag interaction method 100 of the embodiment of the present invention, the method 100 includes: receiving the user's first input command and opening a tag management interface that summarizes all tags of the image capture device; acquiring tag data for at least one tag controlled in the tag management interface, and displaying the tag data on the tag management interface according to a preset rule; and receiving a second input command, controlling one or more tags according to the controlled target, and displaying the control result in the video image.
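The three steps above can be sketched in code. This is a minimal illustration only; the `Tag` and `TagManagementInterface` types and all function names are assumptions introduced for the sketch, not names from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Tag:
    name: str
    group: str
    visible: bool = True

@dataclass
class TagManagementInterface:
    tags: list = field(default_factory=list)

def handle_first_input(all_device_tags):
    # Step 1: in response to the first input command, open the tag
    # management interface holding the device's tag set.
    return TagManagementInterface(tags=list(all_device_tags))

def display_by_rule(interface, rule):
    # Step 2: arrange tag data on the interface per a preset rule
    # (here modeled as grouping by a key function).
    groups = {}
    for tag in interface.tags:
        groups.setdefault(rule(tag), []).append(tag)
    return groups

def handle_second_input(interface, command, targets):
    # Step 3: apply a control action and return the result that a
    # real system would reflect in the video image.
    if command == "hide":
        for t in targets:
            t.visible = False
    elif command == "show":
        for t in targets:
            t.visible = True
    return [(t.name, t.visible) for t in targets]
```

For example, `handle_second_input(iface, "hide", iface.tags)` hides every tag at once, which is the batch behavior the method aims at.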
Therefore, according to the tag interaction method 100 of the embodiment of the present invention, the tag control interaction mode is simple, the use experience and the execution efficiency are high, and the tag operation efficiency and the use experience can be improved.
The steps of the tag interaction method 100 according to an embodiment of the present invention are described in detail below.
First, according to an embodiment of the present invention, the generating a tag management interface specifically includes: and acquiring a label set of the image acquisition equipment, and generating the label management interface, wherein the label management interface comprises the label set of the image acquisition equipment.
That is, the step of receiving the user's first input command to open the tag management interface includes: triggering according to the user's input instruction; and acquiring the tag set of the image capture device and opening the user interface for managing the tags.
The processing object in the embodiment of the present invention is a tag, and the embodiment of the present invention may be applied to tags in different display forms superimposed on video frames of different image capturing devices.
Optionally, in some embodiments of the present invention, the first input command is triggered by the user clicking a specific area, by the user pressing a preset key or key combination on a keyboard, or by the user entering an instruction with another preset input device other than a conventional keyboard and mouse, including but not limited to a joystick and a touch screen.
In other words, the first input command may be entered in various ways, such as a mouse click or a touch-screen tap, where the user intuitively clicks a specific trigger area to open the tag management interface; or through keyboard input, where the user triggers the opening of the tag management interface by pressing a designated key or key combination; or, in a dedicated tag maintenance scenario, a timer may keep the tag management interface open; the user may also select at least one tag on the interface by box selection or another gesture to automatically trigger the interface to open.
In some embodiments of the invention, the acquired tag set of the image capture device is from tag data stored in a storage device.
That is, the tag data may come from a database, from long-term storage such as a hard disk, magnetic disk, or optical disc, or from temporary storage such as memory or a network carrier. The data range may be the full tag set, including all tags on the video images of every accessible image capture device; the tags of the image capture device currently playing the video picture; a subset of the full set filtered by a predefined rule, such as account permissions; or a partial set manually selected by the user. The range is not specifically limited.
According to one embodiment of the invention, the preset rule comprises a grouping rule, and the grouping rule is a police tag type, a social facility tag type, a shape type of the tag or a collection device to which the tag belongs. In other words, the preset rule in the step of displaying the tag data on the tag management interface according to the preset rule in the embodiment of the present invention may be a type of a tag, where the type of the tag includes a social facility type, a police type, a tag appearance, and an image capturing device.
The tag data may be displayed by simply listing the basic information of all tags, or by displaying all tags graphically according to their appearance. Tags may be grouped according to a preset grouping rule. For example, they may be grouped by tag type into police tags such as police officer tags and police car tags, or into social facility tag types such as hospitals, schools, and banks (when this grouping rule is used, the grouping type must be specified when the tag is created). They may be grouped by tag shape into types such as arrow tags, vector tags, area tags, and polygon tags. They may also be grouped by the image capture device to which the tag belongs, into groups such as image capture device 1 and image capture device 2. When presenting, only the groups may be shown to keep the interface concise, or the groups together with all their elements may be shown to provide detailed information; the specific presentation manner is not limited.
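The grouping rules above (by tag type, shape, or owning capture device) amount to bucketing tags by a chosen key. A small sketch, where the sample tags and field names are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical tag records; real tag data would come from the storage device.
tags = [
    {"name": "officer-117", "type": "police", "shape": "arrow", "device": "cam-1"},
    {"name": "hospital-A", "type": "social", "shape": "area", "device": "cam-1"},
    {"name": "police-car-3", "type": "police", "shape": "vector", "device": "cam-2"},
]

def group_by(tags, key):
    # Bucket tag names under the value of the chosen grouping field.
    groups = defaultdict(list)
    for t in tags:
        groups[t[key]].append(t["name"])
    return dict(groups)

by_type = group_by(tags, "type")      # police vs. social facility tags
by_device = group_by(tags, "device")  # per image capture device
```

The same `group_by` serves all the grouping rules; only the key changes, which matches the interchangeable rules described above.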
Wherein the image capturing device can capture panoramic images. Optionally, the image capture device is an AR augmented reality image capture device.
In addition to presenting tags and tag groups, the tag management user interface may also display control trigger controls for the tags. For example, a check box or toggle button may be provided beside the control that presents a tag group to control whether all tags in the group are presented in the video picture: when the check box is checked, every tag in the group that can be presented in the video picture is presented; when it is unchecked, none of the tags in the group are presented, even if they could be. As another example, a button may be provided beside the control that presents a tag group, and clicking the button expands a secondary control menu. Alternatively, no additional trigger is presented, and the control presenting the tag group itself serves as the input control, triggering presentation of the tag content when it receives an interaction such as a click.
In addition to presenting tags and tag groups, the tag management user interface may also display only the result of an operation. For example, a non-interactive status display control may be provided beside the control that presents a tag or tag group; after a tag group control action is completed through an input mode that operates the target object indirectly, the status display control shows that the corresponding control has started or stopped. It will be appreciated that the tag management user interface provides a consolidated display of the tags and a convenient entry point for interaction, but interaction need not necessarily be initiated and processed from this interface.
Meanwhile, the display interface of the tag data can support data query functions such as searching, sorting and filtering in order to further provide a better interaction mode.
The search may support fuzzy matching or exact matching. For example, a fuzzy search for tags containing 139 in any field returns both a police officer tag whose serial number contains 139 and a police officer tag whose telephone number contains 139; an exact-match search for 139, by contrast, does not return a police officer tag whose telephone number merely contains 139, nor a duty point tag whose contact number merely contains 139. The specific search method is not limited.
Sorting may be performed according to various characteristics of the tag groups. For example, groups may be sorted in reverse order by the number of tags they contain, so that group A with 39 tags ranks first and group B with 20 tags ranks second; or tags may be sorted within a group (or without grouping) by a specific character field of the tag content, for example in ascending order of police officer number, so that tag A with officer number 117 is arranged after tag B with officer number 112. The specific sorting method is not limited.
Filtering may be based on tag groups, such as not displaying the police car group; it may also use conditions set on the tag content or preset conditions, for example hiding all tags whose contact number contains 139, which filters out every tag containing that character string. The specific filtering method is not limited.
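The query behaviors above can be sketched together. The tag records and field names below are illustrative assumptions; the point is the contrast between fuzzy matching (substring anywhere), exact matching (whole field equals the term), group sorting, and content filtering:

```python
# Hypothetical tag records for illustration.
tags = [
    {"kind": "officer", "serial": "139", "phone": "555-0100"},
    {"kind": "officer", "serial": "117", "phone": "555-0139"},
    {"kind": "duty-point", "serial": "042", "phone": "555-0200"},
]

def fuzzy_search(tags, term):
    # Fuzzy: the term may appear anywhere in any field.
    return [t for t in tags if any(term in str(v) for v in t.values())]

def exact_search(tags, field, value):
    # Exact: the whole named field must equal the value.
    return [t for t in tags if t.get(field) == value]

def sort_groups(groups):
    # Reverse order by number of tags per group.
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

def filter_out(tags, term):
    # Hide every tag containing the term in any field.
    return [t for t in tags if not any(term in str(v) for v in t.values())]
```

Here `fuzzy_search(tags, "139")` returns two tags (one by serial number, one by phone number), while `exact_search(tags, "serial", "139")` returns only the first, mirroring the 139 example in the text.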
In some embodiments of the present invention, the step of obtaining the tag data and displaying the tag data on the tag management interface according to a preset rule includes receiving a third input command, and changing a display style of the tag data on the tag management interface according to the third input command; wherein the third input command comprises: grouping commands, wherein the label data are grouped and displayed according to the grouping commands; or ordering commands, ordering the tag data according to the ordering commands; or the search command is used for searching the label data according to the name, the characters or the serial number.
That is, in order to present a plurality of tags intuitively, the tag management interface may include a variety of convenience functions that optimize the presentation of data, including: grouping by preset tag type; sorting by content such as name or serial number; searching by tag content; and grouping by the image capture device to which the tag belongs.
According to an embodiment of the present invention, the step of operating the target object by the tag management interface includes: selecting one or more target objects; control of the target object is performed according to the input.
Optionally, the target object includes a control presenting a tag group and a control referring to a tag; the tag group includes classifications by tag type, tag collections generated by user selection, and tag collections generated by user search; and the input includes but is not limited to mouse single clicks, double clicks, keyboard shortcuts, and touch-screen taps.
That is, the tag management user interface may provide a function of selecting provided contents, and the user may select one to a plurality of tags or tag groups as target objects through a standard input device such as a keyboard, a mouse, or a touch screen. The input device is merely an example and is not particularly limited.
The tag management interface may provide at least one tag control input control for the target object, and may respectively trigger different tag control methods, for example, the input control may be a button, and the user may click and trigger the input control using an input device such as a keyboard, a mouse, or a touch screen to achieve control; for example, the user can use an input device to check or uncheck the check box to achieve control; such as a drag bar, the user may use the input device to drag the slider to achieve control.
Or, as illustrated in fig. 2, a page with tags grouped by device is expanded, and control over the tag groups and tags is achieved by clicking the group headers or the tag presentation controls within a group.
Or as shown in fig. 3, the expansion menu lists the tag groups or tag types in the first-level expansion menu, lists the tags belonging to the first-level menu items in the second-level expansion menu, and then directly clicks the first-level content in the expansion menu or directly clicks the second-level content to achieve control.
Alternatively, as shown in fig. 4, after the user selects the label, the label management interface pops up, and a plurality of buttons are displayed in parallel at the bottom end to provide a set of controls for one or more labels selected by the user, and the user achieves the controls by clicking the buttons.
The display form, display position and interaction mode of the input control are not particularly limited, and the control is used for controlling the label grouping or the labels or providing a control interface for further control.
The tag management interface also need not be the only input path: the user can directly achieve control of a specific tag group or tag through keyboard shortcuts and the like, without an input mode that explicitly indicates the interaction target.
According to an embodiment of the present invention, the control action on the tag includes: when the target of the second input command is a tag and the image capture device is a dome camera, rotating the dome camera to which the tag belongs to an angle at which the tag can be presented; or when the target of the second input command is a tag, causing the tag on the video image to present a preset selected display state; or when the target of the second input command is a tag group or a tag, hiding part of the tags on the video image; or when the target of the second input command is a tag group or a tag, causing part of the tags hidden on the video image to be presented on the image; or when the target of the second input command is a tag group or a tag, removing the tag from the storage device and the video image; or when the target of the second input command is a tag group or a tag, calibrating the tag position according to a preset movement pattern; or when the target of the second input command is a tag group or a tag, extracting the focused attention content of all target tags and presenting it in a preset tag management interface; or when the target of the second input command is a tag group or a tag, opening a secondary control interface for control.
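The enumerated control actions can be modeled as a dispatch over an action name. A hedged sketch, where the handlers only record what a real system would do (rotate a dome camera, remove a tag from storage and the video image, and so on); action names and return format are assumptions:

```python
def control(action, targets, is_dome_camera=False):
    # Dispatch one control action from the second input command over the
    # selected target tags, returning a human-readable result per tag.
    results = []
    for tag in targets:
        if action == "rotate" and is_dome_camera:
            results.append(f"rotate camera of {tag} into view")
        elif action == "select":
            results.append(f"{tag}: preset selected display state")
        elif action == "hide":
            results.append(f"{tag}: hidden on video image")
        elif action == "show":
            results.append(f"{tag}: shown on video image")
        elif action == "delete":
            results.append(f"{tag}: removed from storage and video image")
        elif action == "calibrate":
            results.append(f"{tag}: position recalibrated by preset movement")
        elif action == "extract":
            results.append(f"{tag}: focused attention content presented")
        else:
            results.append(f"{tag}: secondary control interface opened")
    return results
```

Because the loop runs over all selected targets, a single second input command controls a whole tag group at once, which is the batch interaction the patent emphasizes.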
Specifically, when the user triggers control of the target object, a video frame picture or other additional information picture may present the result of the control.
The target object is at least one tag, or the tag set summarized in at least one tag group, selected in the tag management interface.
If the target object triggered by the user is the label of the image acquisition device currently playing the video picture, the user can intuitively observe the result of the control on the label.
According to one embodiment of the invention, the user may show or hide the target object by checking or unchecking a check box presented beside the target object that represents its display/hide control. After unchecking, the tags belonging to the target object that appear in the video picture are hidden; after checking, the hidden tags of the target object are displayed in the video picture again.
In some embodiments of the present invention, the user may control the transparency of the tags belonging to a target object by dragging a slider presented next to the target object that represents its transparency control. After the control is executed, the transparency of the target object's tags presented in the video picture changes accordingly.
The display modes may include, but are not limited to: fade-out, fade-in, fly-out, fly-in, flicker, shake, blinds, and the like.
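The check-box and transparency-slider behaviour described above can be sketched as follows; the class and handler names are hypothetical, and a real implementation would bind them to actual UI widgets.

```python
class TagView:
    """Toy per-tag display state driven by the check box and the transparency slider."""
    def __init__(self):
        self.visible = True   # bound to the check box next to the target object
        self.alpha = 1.0      # bound to the slider: 1.0 = opaque, 0.0 = fully transparent

    def on_checkbox(self, checked):
        # Unchecking hides the target object's tags; re-checking redisplays them.
        self.visible = checked

    def on_slider(self, value):
        # Dragging the bar changes the transparency of the presented tags.
        self.alpha = min(1.0, max(0.0, value))   # clamp to the valid range

view = TagView()
view.on_checkbox(False)   # uncheck: the tag is hidden in the video picture
view.on_slider(0.4)       # drag: the tag would render at 40% opacity once shown again
print(view.visible, view.alpha)   # False 0.4
```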
It is to be understood that not all of these advantages are necessarily achieved by any particular product or process practiced in accordance with the invention.
Further, the preset movement modes include: clockwise or anticlockwise rotation in a reference frame formed by the dome camera and the ground plane; translation with the video picture as the reference frame; and translation with the indicated north direction as the reference frame.
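As an illustration of the three movement modes, the sketch below recalibrates a tag's normalized screen position under each reference frame. The mode names, and the mapping of "indicated north" onto the +y screen axis, are assumptions made for the example.

```python
import math

def calibrate(x, y, mode, amount, center=(0.5, 0.5)):
    """Move a tag's normalized screen position (x, y) under one preset movement mode.

    mode 'rotate': rotate about the frame centre, as in the dome-camera/ground-plane
                   frame (amount in degrees, anticlockwise positive);
    mode 'screen': translate in the video-picture frame (amount is a (dx, dy) pair);
    mode 'north':  translate along the indicated-north axis (assumed to be +y here).
    """
    if mode == "rotate":
        cx, cy = center
        a = math.radians(amount)
        dx, dy = x - cx, y - cy
        return (cx + dx * math.cos(a) - dy * math.sin(a),
                cy + dx * math.sin(a) + dy * math.cos(a))
    if mode == "screen":
        dx, dy = amount
        return (x + dx, y + dy)
    if mode == "north":
        return (x, y + amount)
    raise ValueError(f"unknown movement mode: {mode}")

# A tag drifted after the camera was knocked; shift it back in the picture frame.
print(calibrate(0.5, 0.5, "screen", (0.1, -0.2)))
```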
According to an embodiment of the present invention, the key content of interest of a target tag includes: the content of the tag; the video picture acquired by the secondary image capture device indicated by the tag; abnormal events detected in the video picture acquired by the secondary image capture device indicated by the tag; and video recordings captured by the secondary image capture device indicated by the tag.
Optionally, the preset tag management interface in which the key content of interest of all target tags is extracted and presented includes: a tag management interface presenting a list of abnormal events; a picture-in-picture interface presenting the live video picture or recorded video of the secondary image capture device indicated by the tag; and a multi-video playback interface presenting several picture-in-picture interfaces. The secondary control interface is a control interface that extends the control behavior; its purpose is to control the selected tags and to provide at least one trigger for controlling them.
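Gathering the key content of interest for presentation in such an interface might look like the following minimal sketch; the dictionary shape and field names are assumptions for illustration only.

```python
def key_content(tags):
    """Collect, for the selected tags, the tag contents and the abnormal events
    reported for the secondary image capture devices the tags indicate."""
    panel = {"contents": [], "abnormal_events": []}
    for tag in tags:
        panel["contents"].append(tag["content"])
        panel["abnormal_events"].extend(tag.get("abnormal_events", []))
    return panel

selected = [
    {"content": "North gate camera", "abnormal_events": ["loitering at 09:41"]},
    {"content": "Lobby camera"},   # no events detected for this tag
]
panel = key_content(selected)
print(panel["abnormal_events"])   # ['loitering at 09:41']
```

A real abnormal-event list interface would render `panel` rather than print it, and the picture-in-picture interfaces would attach video streams instead of strings.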
Therefore, according to the tag interaction method 100, when managing and maintaining tags on video frame images, the user can be helped to find and manage tags more quickly, and when the image capture device is shifted by force majeure, the tags can be recalibrated more quickly; meanwhile, in terms of display, batch show/hide logic provides filtered display of the tags and a better interaction and presentation effect.
As shown in fig. 5, a tag interaction apparatus 200 according to an embodiment of the present invention comprises: a tag management interface generating module 210, a tag data obtaining module 220, and a tag control module 230.
Specifically, the tag management interface generating module 210 may receive a first input command and generate a tag management interface in response to it; the tag data obtaining module 220 may obtain tag data and display it on the tag management interface according to a preset rule; and the tag control module 230 may receive a second input command, control a tag in response to it, and display the control result in the video image.
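The wiring of the three modules can be sketched as the following toy composition; the method names, the preset grouping rule, and the tag fields are hypothetical stand-ins for the real modules 210, 220, and 230.

```python
class TagInteractionDevice:
    """Toy composition of the three modules of apparatus 200."""
    def __init__(self, store):
        self.store = store        # backing tag storage
        self.interface = None     # the generated tag management interface

    def on_first_command(self):
        # Module 210 generates the management interface; module 220 displays the
        # tag data on it according to a preset rule (here: ordered by group name).
        self.interface = sorted(self.store, key=lambda t: t["group"])

    def on_second_command(self, name, visible):
        # Module 230 controls a tag; the result is reflected in the video image.
        for tag in self.store:
            if tag["name"] == name:
                tag["visible"] = visible

store = [{"name": "gate", "group": "police", "visible": True},
         {"name": "bank", "group": "facility", "visible": True}]
dev = TagInteractionDevice(store)
dev.on_first_command()                 # first input command: build the interface
dev.on_second_command("bank", False)   # second input command: hide one tag
print([t["name"] for t in dev.interface], store[1]["visible"])   # ['bank', 'gate'] False
```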
Therefore, according to the tag interaction apparatus 200 of the embodiment of the present invention, when managing and maintaining tags on video frame images, the user can be helped to find and manage tags more quickly, and when the image capture device is shifted by force majeure, the tags can be recalibrated more quickly; meanwhile, in terms of display, batch show/hide logic provides filtered display of the tags and a better interaction and presentation effect.
In addition, an embodiment of the present invention further provides a computer storage medium, where the computer storage medium includes one or more computer instructions, and when executed, the one or more computer instructions implement any of the tag interaction methods 100 described above.
That is, the computer storage medium stores a computer program that, when executed by a processor, causes the processor to perform any of the tag interaction methods 100 described above.
As shown in fig. 6, an embodiment of the present invention provides an electronic device 300, which includes a memory 310 and a processor 320, where the memory 310 is used for storing one or more computer instructions, and the processor 320 is used for calling and executing the one or more computer instructions, so as to implement any one of the methods 100 described above.
That is, the electronic device 300 includes: a processor 320 and a memory 310, in which memory 310 computer program instructions are stored, wherein the computer program instructions, when executed by the processor, cause the processor 320 to perform any of the methods 100 described above.
Further, as shown in fig. 6, the electronic device 300 further includes a network interface 330, an input device 340, a hard disk 350, and a display device 360.
The various interfaces and devices described above may be interconnected by a bus architecture, which may include any number of interconnected buses and bridges. One or more central processing units (CPUs), represented here by the processor 320, and one or more memories, represented by the memory 310, are coupled together over it. The bus architecture may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, and serves to enable communication among the components. Besides a data bus, it includes a power bus, a control bus, and a status signal bus, all of which are well known in the art and therefore not described in detail here.
The network interface 330 may be connected to a network (e.g., the internet, a local area network, etc.), and may obtain relevant data from the network and store the relevant data in the hard disk 350.
The input device 340 may receive various commands input by an operator and send the commands to the processor 320 for execution. The input device 340 may include a keyboard or a pointing device (e.g., a mouse, a trackball, a touch pad, a touch screen, or the like).
The display device 360 may display the result of the instructions executed by the processor 320.
The memory 310 is used for storing programs and data necessary for operating the operating system, and data such as intermediate results in the calculation process of the processor 320.
It will be appreciated that memory 310 in embodiments of the invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. The memory 310 of the apparatus and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 310 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 311 and application programs 312.
The operating system 311 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 312 include various applications, such as a browser, for implementing various application services. A program implementing the methods of embodiments of the present invention may be included in the application programs 312.
The processor 320, when invoking and executing the application programs and data stored in the memory 310, specifically the application program or instructions stored in the application programs 312, performs the tag interaction method of the embodiments described above.
The method disclosed by the above embodiments of the present invention can be applied to the processor 320, or implemented by the processor 320. The processor 320 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated hardware logic circuits or by software instructions in the processor 320. The processor 320 may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 310, and the processor 320 reads the information in the memory 310 and completes the steps of the method in combination with the hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
In particular, the processor 320 is also configured to read the computer program and execute any of the methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
An integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A tag interaction method, the method comprising:
receiving a first input command, and generating a label management interface in response to the first input command;
acquiring label data, and displaying the label data on the label management interface according to a preset rule;
receiving a second input command, responding to the second input command to control the label, and displaying a control result in the video image;
the controlling of the tag comprises:
when the target of the second input command is a tag and the image capture device is a dome camera, rotating the dome camera to which the tag belongs to an angle at which the tag can be presented; or
When the target of the second input command is a tag, causing the tag on the video image to present a preset selected display state; or
When the target of the second input command is a tag group or a tag, hiding some of the tags on the video image; or
When the target of the second input command is a tag group or a tag, causing tags hidden on the video image to be presented again on the image; or
When the target of the second input command is a tag group or a tag, removing the tag from the storage device and the video image; or
When the target of the second input command is a tag group or a tag, calibrating the tag position according to a preset movement mode; or
When the target of the second input command is a tag group or a tag, extracting the key content of interest of all target tags and presenting it in a preset tag management interface; or
When the target of the second input command is a tag group or a tag, opening a secondary control interface for control.
2. The method of claim 1, wherein the generating a tag management interface specifically comprises:
acquiring a tag set of the image capture device and generating the tag management interface, wherein the tag management interface comprises the tag set of the image capture device.
3. The method of claim 2, wherein the first input command is triggered by the user clicking a specific area, by the user pressing a preset keyboard key or key combination, or by the user entering an instruction through another preset input device, including but not limited to a joystick or a touch screen.
4. The method of claim 2, wherein the acquired tag set of the image capture device comes from tag data stored in a storage device.
5. The method of claim 1, wherein the preset rules comprise:
grouping rules, wherein a grouping rule is the type of police tag, the type of social facility tag, the shape of the tag, or the capture device to which the tag belongs.
6. The method according to claim 1, wherein the step of obtaining the tag data and displaying the tag data on the tag management interface according to a preset rule comprises:
receiving a third input command, and changing the display style of the label data on the label management interface according to the third input command;
wherein the third input command comprises:
grouping commands, wherein the label data are grouped and displayed according to the grouping commands;
or ordering commands, ordering the tag data according to the ordering commands;
or a search command, for searching the tag data by name, characters, or serial number.
7. The method of claim 1, wherein the preset movement modes comprise:
clockwise or anticlockwise rotation in a reference frame formed by the dome camera and the ground plane;
translation with the video picture as the reference frame;
and translation with the indicated north direction as the reference frame.
8. The method of claim 1, wherein the key content of interest of the target tag comprises:
the content of the tag;
the video picture acquired by the secondary image capture device indicated by the tag;
abnormal events detected in the video picture acquired by the secondary image capture device indicated by the tag;
and video recordings captured by the secondary image capture device indicated by the tag.
9. The method according to claim 1, wherein the extracting of the key content of interest of all target tags and the presenting of it in a preset tag management interface specifically comprises:
presenting a tag management interface with a list of abnormal events;
a picture-in-picture interface presenting the live video picture or recorded video of the secondary image capture device indicated by the tag;
a multi-video playback interface presenting a plurality of picture-in-picture interfaces;
the secondary control interface being a control interface that extends the control behavior, its purpose being to control the selected tags and to provide at least one trigger for controlling them.
10. An apparatus for tag interaction, the apparatus comprising:
the tag management interface generation module can receive a first input command and generate a tag management interface in response to the first input command;
the tag data acquisition module can acquire tag data and display the tag data on the tag management interface according to a preset rule;
the label control module can receive a second input command, respond to the second input command to control a label and display a control result in a video image;
the control of the tag by the tag control module comprises:
when the target of the second input command is a tag and the image capture device is a dome camera, rotating the dome camera to which the tag belongs to an angle at which the tag can be presented; or
When the target of the second input command is a tag, causing the tag on the video image to present a preset selected display state; or
When the target of the second input command is a tag group or a tag, hiding some of the tags on the video image; or
When the target of the second input command is a tag group or a tag, causing tags hidden on the video image to be presented again on the image; or
When the target of the second input command is a tag group or a tag, removing the tag from the storage device and the video image; or
When the target of the second input command is a tag group or a tag, calibrating the tag position according to a preset movement mode; or
When the target of the second input command is a tag group or a tag, extracting the key content of interest of all target tags and presenting it in a preset tag management interface; or
When the target of the second input command is a tag group or a tag, opening a secondary control interface for control.
11. A computer storage medium comprising one or more computer instructions which, when executed by a processor, perform the method of any of claims 1-9.
CN201910517714.2A 2019-06-14 2019-06-14 Label interaction method and device and computer storage medium Active CN110308848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910517714.2A CN110308848B (en) 2019-06-14 2019-06-14 Label interaction method and device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910517714.2A CN110308848B (en) 2019-06-14 2019-06-14 Label interaction method and device and computer storage medium

Publications (2)

Publication Number Publication Date
CN110308848A CN110308848A (en) 2019-10-08
CN110308848B true CN110308848B (en) 2021-03-16

Family

ID=68076042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910517714.2A Active CN110308848B (en) 2019-06-14 2019-06-14 Label interaction method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110308848B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113721822A (en) * 2021-09-14 2021-11-30 中国银行股份有限公司 Label display method and device
CN113901535A (en) * 2021-10-12 2022-01-07 广联达科技股份有限公司 Label adsorption method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697564B1 (en) * 2000-03-03 2004-02-24 Siemens Corporate Research, Inc. Method and system for video browsing and editing by employing audio
CN1694515A (en) * 1999-09-20 2005-11-09 提维股份有限公司 Closed caption tagging system
CN103188566A (en) * 2011-12-28 2013-07-03 宏碁股份有限公司 Video playing device and operation method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4270118B2 (en) * 2004-11-30 2009-05-27 日本電信電話株式会社 Semantic label assigning method, apparatus and program for video scene
WO2010044780A1 (en) * 2008-10-14 2010-04-22 Hewlett-Packard Development Company, L.P. Dynamic content sorting using tags
US20160085429A1 (en) * 2014-09-23 2016-03-24 Exacttarget, Inc. Beacon management
CN107704159B (en) * 2017-11-15 2020-06-26 维沃移动通信有限公司 Application icon management method and mobile terminal
CN108897474A (en) * 2018-05-29 2018-11-27 高新兴科技集团股份有限公司 A kind of management method and management system of the virtual label of augmented reality video camera

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694515A (en) * 1999-09-20 2005-11-09 提维股份有限公司 Closed caption tagging system
US6697564B1 (en) * 2000-03-03 2004-02-24 Siemens Corporate Research, Inc. Method and system for video browsing and editing by employing audio
CN103188566A (en) * 2011-12-28 2013-07-03 宏碁股份有限公司 Video playing device and operation method thereof

Also Published As

Publication number Publication date
CN110308848A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
JP4380494B2 (en) Content management system, content management method, and computer program
US8456488B2 (en) Displaying digital images using groups, stacks, and version sets
US20070002077A1 (en) Methods and System for Providing Information Services Related to Visual Imagery Using Cameraphones
US7557818B1 (en) Viewing digital images using a floating controller
CN111314759B (en) Video processing method and device, electronic equipment and storage medium
US11894021B2 (en) Data processing method and system, storage medium, and computing device
US20130254662A1 (en) Systems and methods for providing access to media content
US11218639B1 (en) Mobile interface for marking and organizing images
CN112887794B (en) Video editing method and device
CN110308848B (en) Label interaction method and device and computer storage medium
JP2012064297A (en) Content file classification device and content file classification method
CN109445668B (en) Screen-locked magazine display method and device, storage medium and mobile terminal
TWI483173B (en) Systems and methods for providing access to media content
CN110855557A (en) Video sharing method and device and storage medium
WO2024078330A1 (en) Content presentation method and apparatus, device, and storage medium
KR101768914B1 (en) Geo-tagging method, geo-tagging apparatus and storage medium storing a program performing the method
WO2024099201A1 (en) User interaction method and apparatus, device and storage medium
Chiang et al. Quick browsing and retrieval for surveillance videos
CN112698775A (en) Image display method and device and electronic equipment
CN117194697A (en) Label generation method and device and electronic equipment
EP4343579A1 (en) Information replay method and apparatus, electronic device, computer storage medium, and product
WO2023005899A1 (en) Graphic identifier display method and electronic device
CN112988810B (en) Information searching method, device and equipment
WO2024083017A1 (en) Content presentation method and apparatus, device, and storage medium
US20220100457A1 (en) Information processing apparatus, information processing system, and non-transitory computer-executable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant