CN114143477A - Image annotation method and device - Google Patents

Image annotation method and device

Info

Publication number
CN114143477A
CN114143477A (application number CN202111477954.8A; granted publication CN114143477B)
Authority
CN
China
Prior art keywords
output box
output
elements
sub
vector
Prior art date
Legal status
Granted
Application number
CN202111477954.8A
Other languages
Chinese (zh)
Other versions
CN114143477B (en)
Inventor
刘亚伟 (Liu Yawei)
Current Assignee
Beijing Tricolor Technology Co ltd
Original Assignee
Beijing Tricolor Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tricolor Technology Co ltd
Priority to CN202111477954.8A
Publication of CN114143477A
Application granted
Publication of CN114143477B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 — Details of television systems
    • H04N 5/222 — Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2624 — Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 — Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 — Digital output controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446 — Display composed of modules, e.g. video walls
    • H04N 19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/20 — Coding using video object coding
    • H04N 5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image annotation method and device. The method is applied to a screen system that includes a spliced screen group composed of a plurality of spliced screens, an output box corresponding to each spliced screen, and an upper computer, and it comprises the following steps: the upper computer determines a labeling element and sends it to each output box; each output box processes the labeling element to obtain its own labeling sub-element; and each spliced screen displays the labeling sub-element of its corresponding output box. With this method the spliced screen group can be labeled: each spliced screen displays its own labeling sub-element, and because each labeling sub-element is part of the labeled content, the labeled content can be spliced across screens and displayed completely by the spliced screen group.

Description

Image annotation method and device
Technical Field
The invention relates to the technical field of video splicing, in particular to an image annotation method and device.
Background
In the technical field of video splicing, a plurality of signal sources are connected to a plurality of splicing boxes (input boxes and output boxes connected in sequence), and the signal-source videos are displayed on a large spliced screen through the splicing boxes. In practical applications, the videos are often annotated to better present the video signals.
However, some existing large-screen annotation techniques can only annotate within a single screen; the annotated content cannot be spliced across screens and therefore cannot be displayed completely.
Disclosure of Invention
In view of this, the present invention provides an image annotation method and apparatus with which a spliced screen group can be annotated: each spliced screen displays its own annotation sub-element, and the annotated content can be spliced across screens so that it is displayed completely.
In a first aspect, an embodiment of the present invention provides an image annotation method applied to a screen system, where the screen system includes a spliced screen group composed of multiple spliced screens, an output box corresponding to each spliced screen, and an upper computer. The method comprises the following steps: the upper computer determines a labeling element and sends it to each output box; each output box processes the labeling element to obtain its own labeling sub-element; and each spliced screen displays the labeling sub-element of its corresponding output box.
In a preferred embodiment of the present invention, the labeling element is an original image; the original image is a bitmap, i.e. its image format is a bitmap format.
In a preferred embodiment of the present invention, the labeling sub-element is a sub-image, and the step in which each output box processes the labeling element to obtain its labeling sub-element comprises: each output box scales the original image to obtain a target image of the same size as the spliced screen group; and each output box crops the target image based on the display area of its corresponding spliced screen within the spliced screen group to obtain its sub-image.
In a preferred embodiment of the present invention, the labeling element is a vector coordinate sequence, where each vector in the sequence carries a vector type and vector coordinates.
In a preferred embodiment of the present invention, the step of determining the label element and sending the label element to each output box by the upper computer includes: the upper computer generates a vector coordinate sequence; and the upper computer encodes the vector coordinate sequence into binary data and sends the binary data to each output box.
In a preferred embodiment of the present invention, the labeling sub-element is the labeling information of the vector coordinate sequence, and the step in which each output box processes the labeling element to obtain its labeling sub-element comprises: each output box decodes the binary data to obtain the vector coordinate sequence; and each output box determines the labeling information of the portion of the vector coordinate sequence that covers it.
In a preferred embodiment of the present invention, the step in which each output box determines the labeling information of the vector coordinate sequence covering it comprises: each output box determines, based on its own output area, the labeling information of the portion of the vector coordinate sequence that covers it.
In a preferred embodiment of the present invention, the step of displaying the label sub-element of the corresponding output box on each of the tiled display screens includes: each output box draws the labeling information of the vector coordinate sequence covered on the output box on the virtual canvas to obtain the vector drawing labeling information of the output box; wherein a display area of the virtual canvas is larger than a display area of the tiled screen; and each spliced screen displays the corresponding vector drawing marking information of the output box through the virtual canvas.
In a preferred embodiment of the present invention, after the step of drawing, by each output box, the label information of the vector coordinate sequence overlaid on the output box on the virtual canvas to obtain the vector drawing label information of the output box, the method further includes: each output box carries out point-supplementing smoothing processing on the vector drawing and labeling information of the output box; and each spliced screen respectively displays the vector drawing and labeling information of the output box after the point supplementing smoothing processing.
In a second aspect, an embodiment of the present invention further provides an image annotation device, which is applied to a screen system, where the screen system includes a spliced screen group formed by a plurality of spliced screens, an output box corresponding to each spliced screen, and an upper computer; the device comprises: the marking element determining module is used for determining marking elements by the upper computer and sending the marking elements to each output box; the labeling sub-element determining module is used for processing the labeling elements by each output box to obtain the labeling sub-elements of each output box; and the labeling sub-element display module is used for displaying the labeling sub-elements of the corresponding output boxes on each spliced screen.
The embodiment of the invention has the following beneficial effects:
According to the image labeling method and device provided by the embodiment of the invention, the upper computer can send the labeling element to each output box, each output box processes it to obtain its labeling sub-element, and each spliced screen displays the labeling sub-element of its corresponding output box. In this method the spliced screen group can be labeled: each spliced screen displays its own labeling sub-element, and because each labeling sub-element is part of the labeled content, the labeled content can be spliced across screens and displayed completely by the spliced screen group.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a connection relationship of a screen system according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image annotation method according to an embodiment of the present invention;
FIG. 3 is a flowchart of another image annotation method according to an embodiment of the present invention;
FIG. 4 is a flowchart of another image annotation method according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a tiled screen set according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image annotation apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another image annotation device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention; all other embodiments obtained by those skilled in the art from them without creative effort fall within the protection scope of the present invention.
At present, in the technical field of video splicing, a plurality of signal sources are connected to a plurality of splicing boxes (input boxes and output boxes connected in sequence), and the signal-source videos are displayed on a large spliced screen through the splicing boxes; in practical applications, the videos are often annotated to better present the video signals. However, some existing large-screen annotation techniques can only annotate within a single screen; the annotated content cannot be spliced across screens and therefore cannot be displayed completely.
Based on the above, the embodiment of the invention provides an image annotation method and device, relates to the field of video splicing control, and particularly relates to a large-screen cross-screen annotation method.
For the convenience of understanding the embodiment, a detailed description will be given to an image annotation method disclosed in the embodiment of the present invention.
The first embodiment is as follows:
This embodiment provides an image annotation method applied to a screen system, where the screen system includes a spliced screen group composed of a plurality of spliced screens and an output box corresponding to each spliced screen. Referring to the schematic diagram of the connection relationships of the screen system shown in fig. 1, the screen system, also called a large-screen splicing system, includes video sources, input boxes, output boxes, spliced screens, and an upper computer connected in sequence. The output boxes are embedded devices; there are multiple output boxes, each connected to one video source and to one small screen. Several small screens form the spliced screen group, which is the large spliced screen.
As shown in fig. 1, the spliced screen group consists of 4 small screens (small screens 1-4) corresponding to 4 output boxes (output boxes 1-4); all 4 output boxes are communicatively connected to the upper computer, and the video sources are not shown in fig. 1.
Based on the above description, referring to the flowchart of an image annotation method shown in fig. 2, the image annotation method includes the following steps:
Step S202: the upper computer determines the labeling element and sends it to each output box.
Specifically, the upper computer may be a computer, a notebook computer, a server, or any other device capable of human-computer interaction that is communicatively connected to the output boxes. The labeling elements projected onto the large screen may include component elements such as points, straight lines, rectangles, curves, and text, and these component elements have attributes such as color, line thickness, and font size. Considering factors such as the quality of the labeled image and the format of the labeling data, the method has two implementation schemes: scaled display and point-to-point display.
Step S204: each output box processes the labeling element to obtain its labeling sub-element.
The output boxes may perform scaling, splitting, cropping, decoding, and similar processing on the labeling element. The labeling sub-elements of the output boxes generally differ from one another: the labeling element is the content displayed by the spliced screen group as a whole, and processing it in each output box yields the content that this particular output box should display. That is, the labeling sub-element of each output box is part of the content of the labeling element.
Step S206: each spliced screen displays the labeling sub-element of its corresponding output box.
After each output box determines its labeling sub-element, each spliced screen can display the labeling sub-element of its corresponding output box, so that the spliced screen group displays the labeling element and the goal of labeling the spliced screen group is achieved.
According to the image labeling method provided by the embodiment of the invention, the upper computer sends the labeling element to each output box, each output box processes it to obtain its labeling sub-element, and each spliced screen displays the labeling sub-element of its corresponding output box. In this method the spliced screen group can be labeled: each spliced screen displays its own labeling sub-element, and because each labeling sub-element is part of the labeled content, the labeled content can be spliced across screens and displayed completely by the spliced screen group.
Example two:
the embodiment provides another image annotation method, which is implemented on the basis of the above embodiment. Referring to a flowchart of another image annotation method shown in fig. 3, the image annotation method in this embodiment includes the following steps:
Step S302: the upper computer determines an original image and sends it to each output box.
The embodiment of the invention provides a scaled-display scheme: the labeling element is an original image, the original image is a bitmap, and its image format is a bitmap format. The user draws the labeled content on the upper computer, generating an original image that the lower-layer embedded devices can process; the image format may be a bitmap format (which can be converted into yuv422, yuv420, and the like).
The upper computer transmits the generated original image to each output box simultaneously, and each output box receives the same complete original image.
Step S304: each output box scales the original image to obtain a target image of the same size as the spliced screen group.
Each output box scales the entire received original image according to the ratio of the original image's size to the screen group's size, generating a target image, i.e. an image of the same size as the screen group.
Step S306: each output box crops the target image based on the display area of its corresponding spliced screen within the spliced screen group to obtain its sub-image.
In the embodiment of the invention, the labeling sub-elements are sub-images: each output box crops, from the target image, the sub-image that covers its own spliced screen, according to the display area of that screen within the screen group.
Step S308: each spliced screen displays the sub-image of its corresponding output box.
Each spliced screen can synchronously output and display the sub-image of its corresponding output box.
Because the content exchanged between the upper computer and the output boxes is a bitmap or yuv422 image file, the data format is simple and easy to implement. At the same time, because each output box scales and crops the same whole source image generated from the labeled content, the image seams between the spliced screens show no jagged edges or stitching gaps.
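As a rough illustration (not part of the patent), steps S304-S306 can be sketched as follows; the function name, parameter names, and example geometry are assumptions:

```python
def scale_and_crop(original_w, original_h, group_w, group_h, screen_rect):
    """Sketch of steps S304-S306: scale the original image to the
    spliced-screen-group size, then determine the crop rectangle for one
    output box's screen.

    screen_rect is (x0, y0, x1, y1) of this screen within the group."""
    # S304: the target image has exactly the screen group's size.
    target_size = (group_w, group_h)
    # The scale factors each output box applies to the bitmap.
    sx, sy = group_w / original_w, group_h / original_h
    # S306: the sub-image is simply this screen's display area within
    # the (now group-sized) target image.
    crop = screen_rect
    return target_size, (sx, sy), crop

# Example: a 2x2 group of 1920x1080 screens shows a 1280x720 annotation
# bitmap; the output box below drives the bottom-right screen.
size, scale, crop = scale_and_crop(1280, 720, 3840, 2160,
                                   (1920, 1080, 3840, 2160))
```

Because every output box runs the same computation on the same original image, adjacent crops meet exactly at the screen borders, which is why this scheme produces no stitching seams.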
Example three:
the embodiment provides another image annotation method, which is implemented on the basis of the above embodiment. Referring to a flowchart of another image annotation method shown in fig. 4, the image annotation method in this embodiment includes the following steps:
and step S402, the upper computer generates a vector coordinate sequence.
The embodiment of the invention provides a point-to-point display scheme: the labeling element is a vector coordinate sequence, and each vector in the sequence carries a vector type and vector coordinates.
Through the upper computer, a user can generate a vector coordinate sequence from the labeled content to be projected onto the screen, following the drawing order of the component elements. The vector coordinates are the coordinate values of the user's mouse-movement track: drawn first on the canvas of the upper computer they form the upper computer's original canvas, and drawn on the screen group they form the actual output content of the splicing boxes. Specifically, each vector carries a vector type (such as point/straight line/ellipse) and its coordinates on the screen group, and multiple vectors form the vector coordinate sequence in drawing order.
Step S404: the upper computer encodes the vector coordinate sequence into binary data and sends it to each output box.
The upper computer encodes the vector coordinate sequence into binary data and transmits the binary data to each output box in the screen group; each output box receives the same complete binary data.
Step S406: each output box decodes the binary data to obtain the vector coordinate sequence.
Each output box decodes the received binary data, restoring the vector coordinate sequence, and then draws the labeling information one element at a time according to the partial vector coordinate sequence that covers it.
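The patent does not specify the binary encoding used in steps S404/S406; the following sketch assumes a simple hypothetical wire format (a type byte, a point count, then little-endian int32 coordinate pairs) purely for illustration:

```python
import struct

# Hypothetical vector-type tags; the patent does not define concrete values.
TYPE_POINT, TYPE_LINE, TYPE_RECT = 1, 2, 3

def encode(vectors):
    """vectors: list of (vec_type, [(x, y), ...]) -> binary data (step S404)."""
    out = bytearray()
    for vec_type, points in vectors:
        out += struct.pack("<BH", vec_type, len(points))  # type + point count
        for x, y in points:
            out += struct.pack("<ii", x, y)               # int32 coordinates
    return bytes(out)

def decode(data):
    """binary data -> list of (vec_type, [(x, y), ...]) (step S406)."""
    vectors, offset = [], 0
    while offset < len(data):
        vec_type, n = struct.unpack_from("<BH", data, offset)
        offset += 3
        points = []
        for _ in range(n):
            x, y = struct.unpack_from("<ii", data, offset)
            offset += 8
            points.append((x, y))
        vectors.append((vec_type, points))
    return vectors
```

Since every output box receives the same bytes, decode(encode(seq)) must round-trip exactly; any compact, order-preserving serialization would serve the same role.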
Step S408: each output box determines the labeling information of the vector coordinate sequence covering it.
The labeling sub-element in the embodiment of the invention is the labeling information of the vector coordinate sequence. Specifically, each output box determines, based on its own output area, the labeling information of the portion of the vector coordinate sequence that covers it.
The coordinate parameter carried by each vector gives its coordinates on the screen group, and after channel mapping of the screen group the position coordinates of each output box's output area are also known; a vector covers an output box when its coordinates intersect that output box's output area. From this intersection, the partial vector coordinates formed by the overlap of the vector coordinates with the output area of the output box are obtained; these partial vector coordinates are the vectors to be labeled by that output box.
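The coverage test described above amounts to a rectangle-intersection check; a minimal sketch (function and parameter names are assumptions):

```python
def covers(vec_points, out_rect):
    """True if the bounding box of a vector's coordinates intersects an
    output box's output area (x0, y0, x1, y1) on the screen group."""
    xs = [x for x, _ in vec_points]
    ys = [y for _, y in vec_points]
    x0, y0, x1, y1 = out_rect
    # Separating-axis test on the two axis-aligned rectangles.
    return not (max(xs) < x0 or min(xs) > x1 or max(ys) < y0 or min(ys) > y1)
```

Only vectors for which covers(...) is true need to be drawn by that output box; the overlapping portion becomes its labeling information.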
Step S410: each output box draws the labeling information of the vector coordinate sequence covering it on the virtual canvas to obtain its vector-drawing labeling information.
Step S412: each spliced screen displays the vector-drawing labeling information of its corresponding output box through the virtual canvas.
Each output box uses a framebuffer-based embedded GUI (Graphical User Interface) drawing library (a software library) to draw the labeling information on an embedded virtual canvas, forming the target vector-drawing labeling information, which is mapped through the embedded virtual canvas onto the spliced screen for display.
In this method, the virtual canvas is a canvas created slightly larger than the original image area, so that cross-screen text and the like can be drawn whole at the stitching seams to form complete text; the original-image region is then cropped out. Providing a virtual canvas thus solves the problem of splicing cross-screen text while still drawing the complete text.
In addition, because a cross-screen vector graphic is cropped and spliced together only after each small screen has finished its whole drawing, the image seams between boxes may show jagged edges or stitching gaps. The output boxes therefore need to perform point-supplement smoothing, which means processing through image-software algorithms such as linear interpolation.
Accordingly, the above method further comprises: each output box carries out point-supplementing smoothing processing on the vector drawing and labeling information of the output box; and each spliced screen respectively displays the vector drawing and labeling information of the output box after the point supplementing smoothing processing.
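Point-supplement smoothing by linear interpolation, as mentioned above, can be sketched as follows (a simple illustration under assumed names, not the patent's exact algorithm):

```python
import math

def fill_points(p0, p1, max_gap=1.0):
    """Insert linearly interpolated points between p0 and p1 so that
    consecutive points are at most max_gap apart (point-supplement
    smoothing of a drawn stroke)."""
    (x0, y0), (x1, y1) = p0, p1
    # Number of segments needed to keep every gap <= max_gap.
    n = max(1, math.ceil(math.hypot(x1 - x0, y1 - y0) / max_gap))
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]
```

Running this over each pair of adjacent points in a stroke densifies the vector before drawing, which smooths jagged joins at the seams between output boxes.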
The content exchanged between the upper computer and the splicing boxes is entirely geometric vector coordinate-sequence data, so the communication data volume is small; moreover, the image shown on the large screen is drawn point-to-point from the source labeling vector coordinates, so the image quality is higher.
The point-to-point display scheme implementation details are as follows:
First, the labeled-content data structure:
Because each component ELEMENT must be drawn in order, the vector-diagram sequence uses an ordered linear linked-list structure, such as list<ELEMENT>:
1. vector map ELEMENT:
(The ELEMENT structure definition appears only as an image in the original patent publication.)
2. POINT element POINT:
(The POINT structure definition appears only as an image in the original patent publication.)
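The ELEMENT and POINT definitions above are given only as images in the original publication; the following is a plausible reconstruction in Python, with all field names and attribute choices being assumptions based on the surrounding text:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Point:
    """POINT element: one coordinate on the screen group (assumed layout)."""
    x: int
    y: int

@dataclass
class Element:
    """Vector-map ELEMENT: one component element with its attributes
    (type, coordinates, color, line thickness, font size - per the text)."""
    vec_type: str                                 # e.g. "point", "line", "rect", "ellipse", "text"
    points: List[Point] = field(default_factory=list)
    color: int = 0xFFFFFF                         # assumed attribute fields
    line_width: int = 2
    font_size: int = 24

# The vector-diagram sequence is an ordered list (list<ELEMENT>),
# preserving the user's drawing order.
sequence: List[Element] = [
    Element("line", [Point(0, 0), Point(100, 100)]),
    Element("text", [Point(50, 50)]),
]
```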
Second, the cross-screen vector splicing process:
Each output box traverses each vector element in the vector-diagram sequence and calculates the element's coverage area within the output box. Based on its display area (dis_x0, dis_y0, dis_x1, dis_y1), the output box creates a virtual canvas (virtual_x0, virtual_y0, virtual_x1, virtual_y1) larger than the display area (for example virtual_x0 = dis_x0 - 4, virtual_y0 = dis_y0 - 4, virtual_x1 = dis_x1 + 4, virtual_y1 = dis_y1 + 4), draws the geometric vectors of the coverage area on the virtual canvas, and, after all covering vector elements have been drawn to the in-memory canvas, displays the canvas on the output box.
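The virtual-canvas computation just described, with a 4-pixel margin on each side of the output box's display area, can be sketched directly:

```python
def virtual_canvas(dis_rect, margin=4):
    """Given a display area (dis_x0, dis_y0, dis_x1, dis_y1), return the
    slightly larger virtual-canvas rectangle used for cross-screen drawing."""
    dis_x0, dis_y0, dis_x1, dis_y1 = dis_rect
    return (dis_x0 - margin, dis_y0 - margin, dis_x1 + margin, dis_y1 + margin)
```

The margin of 4 matches the example in the text; in practice it only needs to be large enough to hold the cross-screen portion of a text element before the display-area region is cropped back out.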
Referring to the schematic diagram of a spliced screen group shown in fig. 5, take small screen 5 in the 3 × 3 screen group of fig. 5 as an example: the thicker-lined region is the actual display area of small screen 5, and the thinner-lined region is its virtual canvas area. The implementation for each kind of vector element is as follows:
1. Point, straight line, rectangle, etc.:
These elements do not involve the cross-screen element-completeness problem; the output box simply calculates and draws the start and end points that fall within the actual display area of its small screen.
2. Text elements:
These elements involve cross-screen element completeness, but the area occupied by one complete element is small and the virtual canvas area can cover it entirely. Characters exceeding the boundary of the small screen's actual display area are therefore drawn in full between the display-area boundary and the virtual-canvas boundary, and the part beyond the display-area boundary is clipped off at final display.
3. Ellipse elements:
These elements involve cross-screen element completeness, and the area occupied by one complete element may be large, so the virtual canvas area cannot always cover it entirely. The upper computer is therefore required to convert ellipse elements into a point sequence, and the output boxes draw them point by point.
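Converting an ellipse into a point sequence, as required of the upper computer above, can be sketched as uniform angular sampling (the sampling density and function name are assumptions):

```python
import math

def ellipse_to_points(cx, cy, rx, ry, n=360):
    """Sample an axis-aligned ellipse centred at (cx, cy) with radii
    (rx, ry) into n points, which output boxes can then draw one by one."""
    return [(cx + rx * math.cos(2 * math.pi * i / n),
             cy + ry * math.sin(2 * math.pi * i / n))
            for i in range(n)]
```

Once reduced to points, an ellipse needs no special cross-screen handling: each output box keeps and draws only the points covering its own area, like any other point element.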
Third, the framebuffer-based embedded GUI drawing library:
QT (an application development framework): its drawing API (Application Programming Interface) is rich, but it must run as an independent process, occupies comparatively many system resources, and requires a communication protocol to be designed between the QT process and the application process.
SDL2 (Simple DirectMedia Layer, an open-source cross-platform multimedia development library): it supports only simple geometric vectors such as points, lines, rectangles, and text, and its framebuffer-based drawing scheme may also depend on components such as X11 (a window system for bitmap displays) or directfb (a lightweight graphics library providing hardware graphics acceleration, input-device handling, and abstraction), which remains to be tested and verified. However, SDL2 can be loaded into the application process as a dynamic library, occupying fewer system resources, and the application process uses the SDL2 library directly via function calls without designing an additional communication protocol.
In summary, the method provided by the embodiment of the invention can restore the labeled image point by point without scaling the labeling elements, achieving high-definition display. The virtual canvas is slightly larger than the actual small-screen display area, so complete text can be drawn at the stitching seams and the target image cropped out afterwards; the virtual canvas thus solves the cross-screen text-splicing problem. All kinds of labeling elements drawn on the upper computer can be projected onto the spliced screen group and spliced across screens into complete elements output over the video, greatly improving the experience of video splicing control.
Example Four:
Corresponding to the above method embodiment, an embodiment of the present invention provides an image annotation apparatus applied to a screen system. Referring to fig. 6, which shows a schematic structural diagram of the apparatus, the image annotation apparatus includes:
a labeling element determining module 61, configured for the upper computer to determine a labeling element and send it to each output box;
a labeling sub-element determining module 62, configured for each output box to process the labeling element to obtain the labeling sub-element of that output box;
and a labeling sub-element display module 63, configured for each spliced screen to display the labeling sub-elements of its corresponding output box.
According to the image labeling apparatus provided by the embodiment of the invention, the upper computer sends the labeling element to each output box, each output box processes the labeling element to obtain its labeling sub-element, and each spliced screen displays the labeling sub-elements of its corresponding output box. In this apparatus, the spliced screen group can be labeled, each spliced screen displays its own labeling sub-elements, and each labeling sub-element is part of the labeled content, so the labeled content can be spliced across screens and the spliced screen group can display it completely.
The labeling element is an original image; the original image is a bitmap, and its image format is a bitmap format.
The labeling sub-element is a sub-image. The labeling sub-element determining module is configured for each output box to scale the original image to obtain a target image with the same size as the spliced screen group, and for each output box to crop the target image based on the display area of its corresponding spliced screen within the spliced screen group to obtain the sub-image of that output box.
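The scale-then-crop step can be sketched as below. This is a hedged illustration using a plain 2D list as the image and nearest-neighbour scaling; the patent does not specify the scaling algorithm, and a real output box would operate on a hardware framebuffer rather than Python lists.

```python
def scale_and_crop(img, wall_w, wall_h, box_rect):
    """Scale img (a 2D list of pixel values) to the full spliced-screen-group
    resolution with nearest-neighbour sampling, then crop out the region
    box_rect = (x, y, w, h) that one output box's screen displays."""
    src_h, src_w = len(img), len(img[0])
    # Scale so the target image has exactly the wall's size; each output
    # box's crop rectangle then equals its screen's display region.
    target = [[img[r * src_h // wall_h][c * src_w // wall_w]
               for c in range(wall_w)] for r in range(wall_h)]
    x, y, w, h = box_rect
    return [row[x:x + w] for row in target[y:y + h]]
```

For example, a 2x2 source image scaled to a 2x2 wall of 2x2-pixel screens gives each screen one quadrant of the picture.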
The labeling element is a vector coordinate sequence, where the vector coordinate sequence carries a vector type and vector coordinates.
The labeling element determining module is configured for the upper computer to generate the vector coordinate sequence, encode it into binary data, and send the binary data to each output box.
The labeling sub-element is the labeling information of the vector coordinate sequence. The labeling sub-element determining module is configured for each output box to decode the binary data to obtain the vector coordinate sequence, and for each output box to determine the labeling information of the portion of the vector coordinate sequence that covers that output box.
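The encode/decode round trip between the upper computer and the output boxes can be sketched as follows. The wire format here (one byte of vector type, a 16-bit point count, then big-endian int32 x/y pairs) is a hypothetical layout for illustration; the patent does not define the actual binary protocol.

```python
import struct

def encode_sequence(vec_type, points):
    """Upper-computer side: pack a vector type plus (x, y) coordinate
    pairs into a binary message for broadcast to the output boxes."""
    buf = struct.pack(">BH", vec_type, len(points))
    for x, y in points:
        buf += struct.pack(">ii", x, y)
    return buf

def decode_sequence(buf):
    """Output-box side: recover the vector type and coordinate list."""
    vec_type, n = struct.unpack_from(">BH", buf, 0)
    points = [struct.unpack_from(">ii", buf, 3 + 8 * i) for i in range(n)]
    return vec_type, points
```

Signed 32-bit coordinates allow points that fall left of or above a box's origin after translation into local coordinates.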
The labeling sub-element determining module is configured for each output box to determine, based on its output area, the labeling information of the vector coordinate sequence that covers that output box.
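The coverage test each output box applies can be sketched as a simple point-in-rectangle filter that also translates surviving points into the box's local coordinates. This is an assumed minimal implementation; the patent only states that coverage is decided by the box's output area.

```python
def covered_points(points, box_rect):
    """Return the points of a vector coordinate sequence that fall inside
    one output box's output area box_rect = (x, y, w, h), translated into
    that box's local coordinate system for drawing."""
    x, y, w, h = box_rect
    return [(px - x, py - y) for px, py in points
            if x <= px < x + w and y <= py < y + h]
```

A stroke crossing a splice seam thus yields a non-empty local point list in each of the boxes it touches, which is what lets the boxes jointly render one cross-screen element.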
The labeling sub-element display module is configured for each output box to draw, on a virtual canvas, the labeling information of the vector coordinate sequence covering that output box, obtaining the vector drawing labeling information of the output box, where the display area of the virtual canvas is larger than the display area of the spliced screen; each spliced screen then displays the vector drawing labeling information of its corresponding output box through the virtual canvas.
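The oversized-canvas idea can be sketched as below: allocate a canvas extended by a margin on every side so text straddling a splice seam can be drawn whole, then show only the screen-sized centre window. The margin value and the list-of-lists canvas are assumptions for illustration.

```python
def make_virtual_canvas(screen_w, screen_h, margin):
    """Allocate a virtual canvas slightly larger than one spliced screen,
    with `margin` extra pixels on each side for cross-seam drawing."""
    canvas_w, canvas_h = screen_w + 2 * margin, screen_h + 2 * margin
    return [[0] * canvas_w for _ in range(canvas_h)]

def crop_to_screen(canvas, screen_w, screen_h, margin):
    """Cut the screen-sized centre window out of the virtual canvas;
    this is the target image the spliced screen actually displays."""
    return [row[margin:margin + screen_w]
            for row in canvas[margin:margin + screen_h]]
```

Text anchored near a seam is rendered into the margin as well, so the glyphs cropped out by neighbouring boxes line up into complete characters across screens.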
Referring to fig. 7, another image annotation apparatus is shown, which further includes a point-supplementing smoothing processing module 64 connected to the labeling sub-element display module 63. The point-supplementing smoothing processing module 64 is configured for each output box to perform point-supplementing smoothing processing on its vector drawing labeling information, after which each spliced screen displays the smoothed vector drawing labeling information of its corresponding output box.
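One plausible form of the point-supplementing smoothing is linear densification: insert intermediate points wherever two consecutive samples are far apart, so a fast pen stroke renders as a continuous line rather than scattered dots. The patent does not specify the algorithm, so the interpolation scheme and `max_gap` threshold below are assumptions.

```python
def supplement_points(points, max_gap=4):
    """Insert linearly interpolated points between consecutive samples so
    no gap along either axis exceeds max_gap pixels (a sketch of the
    point-supplementing smoothing step)."""
    if not points:
        return []
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = max(abs(x1 - x0), abs(y1 - y0))
        steps = max(1, -(-dist // max_gap))  # ceiling division
        for i in range(1, steps + 1):
            out.append((round(x0 + (x1 - x0) * i / steps),
                        round(y0 + (y1 - y0) * i / steps)))
    return out
```

A spline-based variant (e.g. Catmull-Rom) would give visually smoother curves at slightly higher cost; the linear version suffices to close gaps between samples.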
The image annotation apparatus provided by the embodiment of the invention has the same technical features as the image annotation method provided above, so it can solve the same technical problems and achieve the same technical effects.
Example Five:
An embodiment of the invention also provides an electronic device for running the above image annotation method. Referring to fig. 8, the electronic device includes a memory 100 and a processor 101; the memory 100 stores one or more computer instructions, which are executed by the processor 101 to implement the image annotation method.
Further, the electronic device shown in fig. 8 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 103 (wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in fig. 8, but this does not indicate that there is only one bus or one type of bus.
The processor 101 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by software-form instructions in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 100; the processor 101 reads the information in the memory 100 and, in combination with its hardware, completes the steps of the method of the foregoing embodiments.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions that, when called and executed by a processor, cause the processor to implement the above image annotation method; for the specific implementation, reference may be made to the method embodiments, which are not repeated here.
The computer program product of the image annotation method and apparatus provided in the embodiments of the present invention includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method in the foregoing method embodiments, and specific implementation may refer to the method embodiments, and details are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and/or the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can still, within the technical scope of the present disclosure, modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall all be covered by it. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. The image annotation method is characterized by being applied to a screen system, wherein the screen system comprises a spliced screen group consisting of a plurality of spliced screens, an output box corresponding to each spliced screen, and an upper computer; the method comprises the following steps:
the upper computer determines a labeling element and sends the labeling element to each output box;
each output box processes the labeled elements to obtain labeled sub-elements of each output box;
and each spliced screen respectively displays the corresponding labeling sub-elements of the output box.
2. The method of claim 1, wherein the annotation element is an original image, the original image is a bitmap, and the image format of the original image is a bitmap format.
3. The method of claim 2, wherein the annotation sub-element is a sub-image; each output box processes the labeled elements to obtain labeled sub-elements of each output box, and the steps comprise:
each output box carries out scaling processing on the original image to obtain a target image with the same size as the spliced screen group;
and each output box cuts the target image based on the display area of the corresponding spliced screen in the spliced screen group to obtain the sub-image of each output box.
4. The method of claim 2, wherein the annotation element is a vector coordinate sequence, wherein the vector coordinate sequence carries a vector type and a vector coordinate.
5. The method of claim 4, wherein the host computer determines a tagging element, and the step of sending the tagging element into each of the output cartridges comprises:
the upper computer generates the vector coordinate sequence;
and the upper computer encodes the vector coordinate sequence into binary data and sends the binary data to each output box.
6. The method of claim 5, wherein the label sub-element is label information of the vector coordinate sequence; each output box processes the labeled elements to obtain labeled sub-elements of each output box, and the steps comprise:
each output box decodes the binary data to obtain the vector coordinate sequence;
and each output box respectively determines the labeling information of the vector coordinate sequence covered to the output box.
7. The method of claim 6, wherein the step of each output bin separately determining labeling information for the sequence of vector coordinates overlaid onto the output bin comprises:
and each output box determines the labeling information of the vector coordinate sequence covered to the output box based on the output area of the output box.
8. The method of claim 7, wherein the step of displaying the labeled subelements of the corresponding output boxes on each of the tiled screens comprises:
each output box draws the labeling information of the vector coordinate sequence covered on the output box on a virtual canvas to obtain the vector drawing labeling information of the output box; wherein a display area of the virtual canvas is larger than a display area of the tiled screen;
and each spliced screen displays the corresponding vector drawing marking information of the output box through a virtual canvas.
9. The method of claim 8, wherein, after the step of each output box drawing the labeling information of the vector coordinate sequence covered on that output box on a virtual canvas to obtain the vector drawing labeling information of the output box, the method further comprises:
each output box carries out point-supplementing smoothing processing on the vector drawing and labeling information of the output box;
and each spliced screen respectively displays the vector drawing and labeling information of the output box after the point supplementing smoothing processing.
10. An image annotation device is characterized by being applied to a screen system, wherein the screen system comprises a spliced screen group consisting of a plurality of spliced screens, an output box corresponding to each spliced screen, and an upper computer; the device comprises:
the upper computer is used for determining a label element and sending the label element to each output box;
the labeling sub-element determining module is used for processing the labeling elements by each output box to obtain the labeling sub-elements of each output box;
and the label sub-element display module is used for displaying the corresponding label sub-elements of the output box on each spliced screen.
CN202111477954.8A 2021-12-06 2021-12-06 Image labeling method and device Active CN114143477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111477954.8A CN114143477B (en) 2021-12-06 2021-12-06 Image labeling method and device


Publications (2)

Publication Number Publication Date
CN114143477A true CN114143477A (en) 2022-03-04
CN114143477B CN114143477B (en) 2023-12-08

Family

ID=80384294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111477954.8A Active CN114143477B (en) 2021-12-06 2021-12-06 Image labeling method and device

Country Status (1)

Country Link
CN (1) CN114143477B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593072A (en) * 2009-04-28 2009-12-02 Guangdong Vtron Technologies Co., Ltd. Method for processing broken pen strokes in a tiled-wall multi-person labeling system
JP2012063974A (en) * 2010-09-16 2012-03-29 Dainippon Printing Co Ltd Stroke display system and program
CN102855115A (en) * 2012-08-30 2013-01-02 Guangdong Vtron Technologies Co., Ltd. Method and system for displaying character label based on splicing wall
US20140009422A1 (en) * 2012-07-09 2014-01-09 Kazuhisa ARIGAYA Screen control system, screen control device, and information processing terminal
CN104571992A (en) * 2014-12-30 2015-04-29 Guangdong Vtron Technologies Co., Ltd. Simultaneous multi-screen marking method and device
CN104834431A (en) * 2015-04-03 2015-08-12 Guangdong Vtron Technologies Co., Ltd. Spliced display system whole-wall labeling method and system and spliced display system
CN105430299A (en) * 2015-11-27 2016-03-23 Guangdong Vtron Technologies Co., Ltd. Joined screen signal source annotation method and system
CN106598520A (en) * 2016-12-12 2017-04-26 Guangdong Vtron Technologies Co., Ltd. Multi-screen labeling method and splicing wall
CN106648317A (en) * 2016-12-15 2017-05-10 Guangdong Vtron Technologies Co., Ltd. Marking method, system and device of spliced wall display interface
CN107632811A (en) * 2017-09-30 2018-01-26 Vtron Group Co., Ltd. Annotation method for an image mosaic wall and image mosaic wall system
CN110874959A (en) * 2018-09-03 2020-03-10 Shenzhen Yingshuo Technology Co., Ltd. Multi-terminal same-screen teaching system and teaching method
CN111752557A (en) * 2019-03-27 2020-10-09 Alibaba Group Holding Ltd. Display method and device
CN112099744A (en) * 2020-08-26 2020-12-18 Xi'an Wanxiang Electronics Technology Co., Ltd. Image display method, device and system
CN113571008A (en) * 2021-07-16 2021-10-29 Xi'an NovaStar Tech Co., Ltd. Bright and dark line adjusting method, device and system of spliced display screen


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANG YAN et al.: "SPUM: A Screen Partition Update Method for Embedded Multi-Window Systems", IEEE *
LU Ruxi: "Design and Implementation of a Spatial Three-Dimensional Non-Contact Multi-Touch System", Silicon Valley, no. 01 *
ZHANG Fan; WANG Zhangye; WU Kankan; WANG Fei; XU Haibo: "Research Progress of Large-Screen Tiled Display Visualization Technology", Journal of Computer-Aided Design & Computer Graphics, no. 01 *

Also Published As

Publication number Publication date
CN114143477B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN110287368B (en) Short video template design drawing generation device and short video template generation method
US7800636B2 (en) Magnification engine
CN111209721A (en) Bitmap font realization method and device, electronic equipment and storage medium
CN110532497B (en) Method for generating panorama, method for generating three-dimensional page and computing device
CN112651475B (en) Two-dimensional code display method, device, equipment and medium
CN104050185A (en) Zoom-display processing method and device for page contents
CN111193878B (en) Multimedia text information processing method, device, medium and electronic equipment
CN115439609B (en) Three-dimensional model rendering method, system, equipment and medium based on map service
JP4370438B2 (en) Vector image drawing apparatus, vector image drawing method and program
CN115861609A (en) Segmentation labeling method of remote sensing image, electronic device and storage medium
US20130009965A1 (en) Animation display device
CN111260750A (en) Method and device for processing openFL drawn vector graphics and electronic equipment
CN112184538B (en) Image acceleration method, related device, equipment and storage medium
CN114143477B (en) Image labeling method and device
CN115988170B (en) Method and device for clearly displaying English characters in real-time video combined screen in cloud conference
CN107193815B (en) Page code processing method, device and equipment
CN111064986B (en) Animation data sending method with transparency, animation data playing method and computer equipment
US20050128217A1 (en) Device, system and method for video signal modification
CN111372009B (en) Image processing method and processing equipment
CN115391692A (en) Video processing method and device
US11120317B2 (en) Graphic identification code generation method and apparatus
JP3603593B2 (en) Image processing method and apparatus
JP6891811B2 (en) Information processing equipment, information processing methods, and programs
CN108897537A (en) Document display method, computer-readable medium and a kind of computer
CN112583976B (en) Graphic code display method, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant