CN110554822A - Label adding and deleting method and computer storage medium - Google Patents

Label adding and deleting method and computer storage medium

Info

Publication number
CN110554822A
Authority
CN
China
Prior art keywords
label
camera
tag
scanning range
selection
Prior art date
Legal status
Pending
Application number
CN201910791423.2A
Other languages
Chinese (zh)
Inventor
李均贺
王博
侯玉清
刘双广
Current Assignee
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd
Priority to CN201910791423.2A
Publication of CN110554822A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data; Database structures therefor; File system structures therefor
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867: Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F16/787: Retrieval using geographical or spatial information, e.g. location
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842: Selection of displayed objects or displayed text elements
    • G06F3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a label adding and deleting method and a computer storage medium. The label adding and deleting method comprises the following steps: S1, receiving a first selection in an AR information video picture to add or delete a label; S2, receiving, according to the first selection, the scanning range in which labels are to be added or deleted; S3, acquiring the cameras falling within the scanning range and displaying them in a display interface; S4, selecting the cameras for which camera labels need to be generated, the label data, and the presentation mode of the label data; and S5, in response to the first selection, generating camera labels and displaying them on the video picture, or deleting them. The label adding and deleting method according to the embodiment of the invention reduces manual workload and improves working efficiency and accuracy.

Description

Label adding and deleting method and computer storage medium
Technical Field
The invention relates to the technical field of video monitoring, and in particular to a label adding and deleting method and a computer storage medium.
Background
At present, image acquisition devices are deployed in many scenes, and the relevant personnel can monitor those scenes through the video frame images the devices capture. Generally, when a video frame image is shown, the displayed content includes only the image itself and the current time. If a user watching a video frame image wants all or some of the cameras near the current image acquisition device to be placed onto the current video frame image in the form of labels, the cameras can only be marked manually, one by one. Adding labels in this way is therefore cumbersome, labor-intensive, and error-prone, and if the labels are laid out unreasonably when they are added, the display effect suffers.
Disclosure of Invention
In view of this, the present invention provides a label adding and deleting method, which can improve accuracy and reduce labor cost.
In order to solve the above technical problem, in one aspect, the present invention provides a label adding and deleting method, comprising the following steps: S1, receiving a first selection in an AR information video picture to add or delete a label; S2, receiving, according to the first selection, the scanning range in which labels are to be added or deleted; S3, acquiring the cameras falling within the scanning range and displaying them in a display interface; S4, selecting, for the cameras for which camera labels need to be generated, the label data and the presentation mode of the label data; and S5, in response to the first selection, generating camera labels and displaying them on the video picture, or deleting them.
According to the label adding and deleting method of the embodiment of the invention, by receiving the first selection in the AR information video picture, the monitoring system can automatically calculate all cameras that meet the conditions and, according to the user's selection, add the label information of the selected cameras to the monitoring system's picture, thereby reducing manual workload and improving working efficiency and accuracy.
According to some embodiments of the present invention, in step S1, a label adding control and a label deleting control are provided in the AR information video picture, and the corresponding control is selected with the mouse to add or delete a label.
According to some embodiments of the present invention, in step S2, the scanning range is formed by selecting a predetermined area in the video picture.
According to some embodiments of the present invention, in step S2, the scanning range is formed by inputting a distance parameter through a configuration interface, and the cameras within that distance are acquired according to the distance parameter.
According to some embodiments of the invention, step S3 includes: S31, determining whether a camera falls within the scanning range by acquiring the range of the actual physical space and the corresponding distances; and S32, acquiring the cameras falling within the scanning range and displaying the qualifying cameras in the form of a list.
According to some embodiments of the invention, step S31 includes: sending a data request command to all cameras associated with the scanning range to acquire the GPS coordinate information of the cameras.
According to some embodiments of the invention, step S31 includes: acquiring the GPS coordinate information of the cameras within the scanning range from a database or a file.
According to some embodiments of the present invention, when the first selection is to add a label, in step S4, the display position of the camera label in the video frame picture is calculated according to the GPS information of the current video frame picture and the GPS information of the camera, and the camera label is generated according to the name of the camera and displayed in the video frame picture.
According to some embodiments of the invention, when the first selection is to delete a tag, in step S4, the tag associated with the selected camera is deleted.
In a second aspect, embodiments of the present invention provide a computer storage medium comprising one or more computer instructions that, when executed, implement any of the methods described above.
Drawings
FIG. 1 is a flow diagram of a tag addition and deletion method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of adding cameras according to the tag addition and deletion method in an embodiment of the invention;
FIG. 3 is a schematic diagram of a method for adding and deleting tags according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an electronic device according to an embodiment of the invention.
Reference numerals:
Tag addition and deletion method 100;
an electronic device 300;
A memory 310; an operating system 311; an application 312;
A processor 320; a network interface 330; an input device 340; a hard disk 350; a display device 360.
Detailed Description
The following detailed description of embodiments of the present invention will be made with reference to the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Specifically, as shown in fig. 1 to 3, a label adding and deleting method 100 according to an embodiment of the present invention includes the following steps: S1, receiving a first selection in an AR information video picture to add or delete a label; S2, receiving, according to the first selection, the scanning range in which labels are to be added or deleted; S3, acquiring the cameras falling within the scanning range and displaying them in a display interface; S4, selecting, for the cameras for which camera labels need to be generated, the label data and the presentation mode of the label data; and S5, in response to the first selection, generating camera labels and displaying them on the video picture, or deleting them.
In other words, when labels are added to or deleted from a video frame image, a first selection is first received in the AR information video picture; then, according to the first selection, the scanning range in which labels are to be added or deleted is received, the cameras falling within the scanning range are acquired, and those cameras are presented in a display interface; next, the data of the camera labels to be generated is selected, and when the cameras are selected, the way their camera labels are presented can also be specified. Finally, in response to the received camera selection, if the first selection is a delete command, the labels associated with the cameras selected by the user are deleted, reducing human error in placing labels on the current video frame image; if the first selection is an add command, the other cameras are quickly added to the video frame in the form of labels. The user can thus add or delete labels in batches within a designated area as needed, avoiding having to add or delete labels one by one, which reduces workload, reduces the variation of manually placed label positions, improves the accuracy of label positions, and improves the reasonableness of the label layout.
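As a concrete illustration of how steps S1 to S5 fit together, the Python sketch below dispatches the first selection over cameras that have already been screened into the scanning range (the screening itself is illustrated further below). Every name in it, including the Camera record, handle_label_request, the overlay dictionary, and the sample coordinates, is a hypothetical choice made for this sketch rather than anything prescribed by the embodiment.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Camera:
    camera_id: str
    name: str
    lat: float          # GPS latitude of the camera
    lon: float          # GPS longitude of the camera

def handle_label_request(action: str,                # S1: "add" or "delete" (the first selection)
                         selected: List[Camera],     # S3/S4: cameras the user picked from the list
                         presentation: str,          # S4: how the label data is presented, e.g. "name"
                         labels: Dict[str, dict]) -> None:
    """S5: generate camera labels and display them, or delete them, per the first selection."""
    for cam in selected:
        if action == "add":
            labels[cam.camera_id] = {
                "text": cam.name if presentation == "name" else cam.camera_id,
                "gps": (cam.lat, cam.lon),
            }
        elif action == "delete":
            labels.pop(cam.camera_id, None)          # remove the label associated with this camera

# Example: add labels for two screened cameras, then delete one of them (hypothetical coordinates).
overlay: Dict[str, dict] = {}
cams = [Camera("cam-B", "Gate B", 23.1293, 113.2648), Camera("cam-C", "Lobby C", 23.1295, 113.2650)]
handle_label_request("add", cams, "name", overlay)
handle_label_request("delete", [cams[0]], "name", overlay)
print(overlay)   # only cam-C's label remains
```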
Therefore, with the label adding and deleting method 100 according to the embodiment of the invention, by receiving the first selection in the AR information video picture, the monitoring system can automatically calculate all cameras that meet the conditions and, according to the user's selection, add the label information of the selected cameras to the monitoring system's picture, thereby reducing manual workload and improving working efficiency and accuracy.
According to an embodiment of the present invention, in step S1, a control for adding a label and a control for deleting a label are provided in the AR information video picture, and the mouse is used to select the corresponding control to add or delete a label, so the operation is simple.
According to still another embodiment of the present invention, in step S2, the scanning range is formed by selecting a predetermined area in the video picture.
Optionally, in step S2, the scanning range is formed by inputting a distance parameter through a configuration interface, and the cameras within that distance are acquired according to the distance parameter.
That is, the scanning range may be obtained in various ways. For example, an area may be selected directly in the video picture: as shown in fig. 2, area 1 is delimited directly with the mouse, and the range of the actual physical space is calculated from the coordinate parameters of area 1; when area 1 is a rectangle, the range of the actual physical space is calculated from the parameters of the rectangle's four vertices. Alternatively, the scanning range may be obtained by inputting a distance: for example, a distance is entered through a configuration interface, and the cameras within that distance are then acquired.
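The two ways of specifying the scanning range can be carried in simple data structures. The sketch below assumes that the mapping from the four image vertices of area 1 to ground GPS coordinates is already available (for example, from camera calibration); that mapping, the sample coordinates, and all class and field names are assumptions of the sketch rather than part of the embodiment.

```python
from dataclasses import dataclass
from typing import List, Tuple

GpsPoint = Tuple[float, float]   # (latitude, longitude)

@dataclass
class AreaScanRange:
    """Scanning range drawn in the video picture: the four vertices of the
    selected rectangle (area 1), already mapped to ground GPS coordinates."""
    corners: List[GpsPoint]

    def bounding_box(self) -> Tuple[float, float, float, float]:
        lats = [p[0] for p in self.corners]
        lons = [p[1] for p in self.corners]
        return min(lats), max(lats), min(lons), max(lons)

@dataclass
class DistanceScanRange:
    """Scanning range entered through the configuration interface as a distance
    around the current video frame's camera position."""
    center: GpsPoint
    radius_m: float

# Example: a rectangle drawn around a crossing vs. a 300 m radius (hypothetical values).
area = AreaScanRange(corners=[(23.1290, 113.2640), (23.1290, 113.2655),
                              (23.1300, 113.2655), (23.1300, 113.2640)])
dist = DistanceScanRange(center=(23.1295, 113.2648), radius_m=300.0)
print(area.bounding_box(), dist.radius_m)
```

The containment test itself, for either representation, is sketched after the screening step described below.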
In one embodiment of the present invention, step S3 includes: S31, determining whether a camera falls within the scanning range by acquiring the range of the actual physical space and the corresponding distances; and S32, acquiring the cameras falling within the scanning range and displaying the qualifying cameras in the form of a list.
Further, step S31 includes: sending a data request command to all cameras associated with the scanning range to acquire the GPS coordinate information of the cameras.
That is to say, all cameras in the monitoring system can communicate with the monitoring system over a network, including but not limited to wired or wireless connections. The monitoring system sends a data request command to the registered cameras; after receiving the corresponding data request command, each camera returns the corresponding information to the monitoring system in the following format: camera ID, camera name, GPS, label content, national standard code, and so on.
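The returned information can be carried in an ordinary record. In the sketch below, the JSON layout, the field names, and the sample national standard code are illustrative assumptions; only the list of fields (camera ID, camera name, GPS, label content, national standard code) comes from the description above.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraInfo:
    camera_id: str
    name: str
    lat: float
    lon: float
    tag_content: Optional[str]      # current label text, if any
    national_standard_code: str     # national standard device code (e.g. a GB/T 28181-style ID)

def parse_camera_reply(payload: str) -> CameraInfo:
    """Parse one camera's reply to the data request command (illustrative message format)."""
    d = json.loads(payload)
    return CameraInfo(
        camera_id=d["cameraId"],
        name=d["name"],
        lat=d["gps"]["lat"],
        lon=d["gps"]["lon"],
        tag_content=d.get("tagContent"),
        national_standard_code=d["gbCode"],
    )

# Example reply (hypothetical values).
reply = '{"cameraId": "cam-B", "name": "Gate B", "gps": {"lat": 23.1293, "lon": 113.2648}, "gbCode": "44010000001320000001"}'
print(parse_camera_reply(reply))
```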
Optionally, step S31 includes: acquiring the GPS coordinate information of the cameras within the scanning range from a database or a file.
Specifically, the monitoring system may store the information of each camera in a database or in a file, where the stored information includes: camera ID, camera name, GPS, label content, national standard code, and so on.
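As one possible way of persisting this information, the sketch below uses a small relational table; the schema, the column names, the sample rows, and the use of SQLite are assumptions made for illustration, not a storage format defined by the embodiment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")                 # stand-in for the monitoring system's database
conn.execute("""
    CREATE TABLE cameras (
        camera_id   TEXT PRIMARY KEY,
        name        TEXT NOT NULL,
        lat         REAL NOT NULL,
        lon         REAL NOT NULL,
        tag_content TEXT,
        gb_code     TEXT
    )
""")
conn.executemany(
    "INSERT INTO cameras VALUES (?, ?, ?, ?, ?, ?)",
    [("cam-B", "Gate B", 23.1293, 113.2648, None, "44010000001320000001"),
     ("cam-C", "Lobby C", 23.1295, 113.2650, "Lobby C", "44010000001320000002")],
)

# Coarse pre-filter with a bounding box; the precise distance or area test runs afterwards.
lat_min, lat_max, lon_min, lon_max = 23.1290, 23.1300, 113.2640, 113.2655
rows = conn.execute(
    "SELECT camera_id, name, lat, lon FROM cameras "
    "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
    (lat_min, lat_max, lon_min, lon_max),
).fetchall()
print(rows)
```

A bounding-box pre-filter keeps the query cheap; only the few rows it returns need the exact distance or area check described next.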
After the camera information is acquired, the distance between two points is calculated according to the GPS information of the current video frame and the GPS data of the other cameras, and the cameras falling within the scanning range are screened out; alternatively, whether a camera falls within the selected area is judged according to its acquired GPS data. The qualifying cameras are displayed in the form of a list, and the user selects from the list the cameras for which labels need to be added.
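A minimal sketch of this screening step is given below: it computes the great-circle distance from the current video frame's GPS position to each camera with the haversine formula, keeps the cameras inside the configured scanning range, and prints them as a numbered list. The function names and the sample coordinates are hypothetical.

```python
import math
from typing import List, Tuple

def haversine_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

def screen_cameras(frame_gps: Tuple[float, float],
                   cameras: List[Tuple[str, float, float]],   # (name, lat, lon)
                   radius_m: float) -> List[Tuple[str, float]]:
    """Return (name, distance) for every camera that falls inside the scanning range."""
    hits = [(name, haversine_m(frame_gps, (lat, lon))) for name, lat, lon in cameras]
    return sorted([(n, d) for n, d in hits if d <= radius_m], key=lambda x: x[1])

# Display the qualifying cameras as a list for the operator to choose from.
frame = (23.1295, 113.2648)                                   # hypothetical frame GPS
nearby = screen_cameras(frame, [("Gate B", 23.1293, 113.2648),
                                ("Lobby C", 23.1295, 113.2650),
                                ("Car park F", 23.1400, 113.2700)], radius_m=300.0)
for i, (name, dist) in enumerate(nearby, 1):
    print(f"{i}. {name}  ({dist:.0f} m)")
```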
According to an embodiment of the present invention, when the first selection is to add a label, in step S4, the display position of the camera label in the video frame picture is calculated according to the GPS information of the current video frame picture and the GPS information of the camera, and the camera label is generated according to the name of the camera and displayed in the video frame picture.
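The description does not spell out the projection used to place the label, so the sketch below makes its assumptions explicit: the viewing camera's compass heading and horizontal field of view are taken as known, the label's horizontal position follows the bearing from the frame's GPS position to the target camera, and the vertical position is simply a fixed row of the frame. It is an illustrative approximation, not the claimed computation.

```python
import math
from typing import Optional, Tuple

def bearing_deg(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Initial compass bearing from point a to point b, in degrees (0 = north)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def label_position(frame_gps: Tuple[float, float], camera_gps: Tuple[float, float],
                   heading_deg: float, hfov_deg: float,
                   width_px: int, height_px: int) -> Optional[Tuple[int, int]]:
    """Pixel position for the camera label, or None if the camera is outside the view."""
    offset = (bearing_deg(frame_gps, camera_gps) - heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > hfov_deg / 2:
        return None                                   # target is outside the horizontal field of view
    x = int((offset / hfov_deg + 0.5) * width_px)     # linear mapping across the frame width
    y = int(0.40 * height_px)                         # fixed row; a real system would also use distance
    return x, y

# Hypothetical 1920x1080 frame looking due east (90 degrees) with a 60 degree field of view.
print(label_position((23.1295, 113.2648), (23.1293, 113.2658), 90.0, 60.0, 1920, 1080))
```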
Optionally, when the first selection is to delete a tag, in step S4, the tag associated with the selected camera is deleted.
According to one embodiment of the invention, in an example application scenario, camera A is a high-point camera or a panoramic camera that can monitor the overall situation of the area, and the other cameras B, C, D, and F are low-point cameras used to monitor specific areas within the monitored area.
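Tying the scenario together, the sketch below shows how the high-point camera A could label, in one batch, every low-point camera within a chosen distance. The coordinates and the 500 m range are made up for illustration, and the flat-earth metre conversion is an approximation that is adequate at this scale.

```python
import math

# Hypothetical positions: high-point camera A and low-point cameras B, C, D, F.
A = (23.1295, 113.2648)
low_points = {"B": (23.1293, 113.2652), "C": (23.1310, 113.2660),
              "D": (23.1330, 113.2700), "F": (23.1260, 113.2630)}

def approx_distance_m(p, q):
    """Equirectangular approximation; fine for a few hundred metres."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(p[0]))
    return math.hypot((q[0] - p[0]) * m_per_deg_lat, (q[1] - p[1]) * m_per_deg_lon)

RANGE_M = 500.0
to_label = {name: gps for name, gps in low_points.items()
            if approx_distance_m(A, gps) <= RANGE_M}
print(sorted(to_label))   # the low-point cameras whose labels are added to camera A's picture in one batch
```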
In summary, according to the label adding and deleting method 100 of the embodiment of the present invention, by specifying a distance or selecting an area in the monitoring picture, the monitoring system can automatically calculate all cameras that meet the conditions and, according to the user's selection, add the label information of the selected cameras to the monitoring system's picture, so that the user does not have to add or delete labels one by one, and working efficiency is improved.
In addition, an embodiment of the present invention further provides a computer storage medium, where the computer storage medium includes one or more computer instructions that, when executed, implement any of the label adding and deleting methods described above.
That is, the computer storage medium stores a computer program that, when executed by a processor, causes the processor to execute any of the label adding and deleting methods described above.
As shown in fig. 4, an embodiment of the present invention provides an electronic device 300, which includes a memory 310 and a processor 320, where the memory 310 is used for storing one or more computer instructions, and the processor 320 is used for calling and executing the one or more computer instructions, so as to implement any one of the methods 100 described above.
That is, the electronic device 300 includes: a processor 320 and a memory 310, in which memory 310 computer program instructions are stored, wherein the computer program instructions, when executed by the processor, cause the processor 320 to perform any of the methods 100 described above.
Further, as shown in fig. 4, the electronic device 300 also includes a network interface 330, an input device 340, a hard disk 350, and a display device 360.
The various interfaces and devices described above may be interconnected by a bus architecture. The bus architecture may include any number of interconnected buses and bridges, which couple together various circuits of one or more central processing units (CPUs), represented by the processor 320, and of one or more memories, represented by the memory 310. The bus architecture may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits. It will be appreciated that the bus architecture is used to enable communication among these components. In addition to a data bus, the bus architecture includes a power bus, a control bus, and a status signal bus, all of which are well known in the art and therefore are not described in detail herein.
The network interface 330 may be connected to a network (e.g., the internet, a local area network, etc.), and may obtain relevant data from the network and store the relevant data in the hard disk 350.
The input device 340 may receive various commands input by an operator and send them to the processor 320 for execution. The input device 340 may include a keyboard or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen).
The display device 360 may display the results of the instructions executed by the processor 320.
The memory 310 is used for storing programs and data necessary for operating the operating system, and data such as intermediate results in the calculation process of the processor 320.
It will be appreciated that the memory 310 in embodiments of the invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. The memory 310 of the apparatus and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 310 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 311 and application programs 312.
The operating system 311 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application programs 312 include various application programs, such as a Browser (Browser), and are used for implementing various application services. A program implementing methods of embodiments of the present invention may be included in application 312.
When calling and executing the application programs and data stored in the memory 310, specifically the application program 312 or the instructions stored in it, the processor 320 receives a first selection in the AR information video picture to add or delete a label; receives, according to the first selection, the scanning range in which labels are to be added or deleted; acquires the cameras falling within the scanning range and displays them in a display interface; selects, for the cameras for which camera labels need to be generated, the label data and the presentation mode of the label data; and, in response to the first selection, generates camera labels and displays them on the video picture or deletes them.
The method disclosed in the above embodiments of the present invention can be applied to the processor 320, or implemented by the processor 320. The processor 320 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 320. The processor 320 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, or any combination thereof, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 310, and the processor 320 reads the information in the memory 310 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
In particular, the processor 320 is also configured to read the computer program and execute any of the methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A tag adding and deleting method is characterized by comprising the following steps:
S1, receiving a first selection in the AR information video picture to add a label or delete the label;
S2, receiving the scanning range of the added label or the deleted label according to the first selection;
S3, acquiring the camera falling in the scanning range, and displaying the camera in a display interface;
S4, selecting, for the cameras for which camera labels need to be generated, the label data and the presentation mode of the label data;
And S5, responding to the first selection, generating a camera label and displaying or deleting the camera label on a video picture.
2. The tag adding and deleting method according to claim 1, wherein in step S1, a tag adding or deleting control is provided in the AR information video screen, and a corresponding control is selected by a mouse to add or delete a tag.
3. The label adding and deleting method according to claim 1, wherein in step S2, the scanning range is formed by selecting a predetermined area in a video screen.
4. The label adding and deleting method according to claim 1, wherein in step S2, the scanning range is formed by inputting a distance parameter through a configuration interface, and the cameras in the distance range are obtained according to the distance parameter.
5. The tag addition and deletion method according to claim 1, wherein step S3 includes:
S31, determining whether the camera falls into the scanning range by acquiring the range of the actual physical space and the corresponding distance;
And S32, acquiring the cameras falling into the scanning range, and displaying the cameras meeting the conditions in a list form.
6. The tag addition and deletion method according to claim 5, wherein step S31 includes: sending a data request command to all cameras associated with the scanning range to acquire the GPS coordinate information of the cameras.
7. The tag addition and deletion method according to claim 6, wherein step S31 includes: acquiring the GPS coordinate information of the cameras within the scanning range from a database or a file.
8. The tag adding and deleting method according to claim 1, wherein when the first selection is to add a tag, in step S4, a display position of the camera tag in the video frame picture is calculated based on the GPS information of the current video frame picture and the GPS information of the camera, and the camera tag is generated based on the name of the camera and displayed in the video frame picture.
9. The tag addition and deletion method according to claim 1, wherein when the first selection is to delete a tag, in step S4, the tag associated with the selected camera is deleted.
10. A computer storage medium comprising one or more computer instructions which, when executed, implement the method of any one of claims 1-9.
CN201910791423.2A 2019-08-26 2019-08-26 Label adding and deleting method and computer storage medium Pending CN110554822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910791423.2A CN110554822A (en) 2019-08-26 2019-08-26 Label adding and deleting method and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910791423.2A CN110554822A (en) 2019-08-26 2019-08-26 Label adding and deleting method and computer storage medium

Publications (1)

Publication Number: CN110554822A
Publication Date: 2019-12-10

Family

ID=68738181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910791423.2A Pending CN110554822A (en) 2019-08-26 2019-08-26 Label adding and deleting method and computer storage medium

Country Status (1)

Country Link
CN (1) CN110554822A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841374A (en) * 2012-11-27 2014-06-04 华为技术有限公司 Display method and system for video monitoring image
CN104363430A (en) * 2014-12-04 2015-02-18 高新兴科技集团股份有限公司 Augmented reality camera monitoring method and system thereof
CN108897474A (en) * 2018-05-29 2018-11-27 高新兴科技集团股份有限公司 A kind of management method and management system of the virtual label of augmented reality video camera
CN109344748A (en) * 2018-09-19 2019-02-15 高新兴科技集团股份有限公司 A method of AR label is added in image frame based on monitoring point GPS

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李大成 et al., "基于增强现实摄像机虚拟标签的设计与管理" [Design and management of virtual labels based on augmented reality cameras], 《现代计算机》 (Modern Computer)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596362A (en) * 2022-03-15 2022-06-07 云粒智慧科技有限公司 High-point camera coordinate calculation method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
US10757374B2 (en) Medical support system
JP2001186334A (en) Device, system and method for picture processing, and storage medium
CN111078910A (en) Medical image storage method, device, system, equipment and storage medium
WO2022143231A1 (en) Method and apparatus for object tracking, electronic device, and system
CN113095995A (en) Webpage watermark adding method and device, electronic equipment and storage medium
CN111144078B (en) Method, device, server and storage medium for determining positions to be marked in PDF (portable document format) file
CN114036438A (en) Page construction method, device, equipment and storage medium
US8074182B2 (en) Work procedure display method and system, production process management method and system, and computer program of the same
CN110020344B (en) Webpage element labeling method and system
CN111800454A (en) Visual data display system and visual page screen projection method
CN109976683B (en) Data printing method, device, equipment and storage medium
CN110554822A (en) Label adding and deleting method and computer storage medium
CN111223155A (en) Image data processing method, image data processing device, computer equipment and storage medium
CN111653330B (en) Medical image display and diagnostic information generation method, system, terminal and medium
US20170052980A1 (en) Information processing system, information processing method, and information processing apparatus
CN111564204A (en) Electronic film generation method and device, computer equipment and storage medium
CN110704321A (en) Program debugging method and device
CN109729316B (en) Method for linking 1+ N cameras and computer storage medium
CN114998768A (en) Intelligent construction site management system and method based on unmanned aerial vehicle
CN111757048A (en) Data center visualization operation and maintenance method and device, computer storage medium and electronic equipment
CN116266482A (en) Equipment software upgrading method and device
CN113140292A (en) Image abnormal area browsing method and device, mobile terminal equipment and storage medium
CN110660463A (en) Report editing method, device and equipment based on ultrasonic system and storage medium
CN109729317B (en) Device for machine linkage of 1+ N cameras
CN111311587A (en) Medical image data processing method, medical image data processing device, medical information system and medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication
Application publication date: 20191210