CN111918016A - Efficient real-time picture marking method in video call - Google Patents
- Publication number
- CN111918016A (application number CN202010725304.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- data
- extracting
- annotation
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
- H04N21/4858—End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
An efficient real-time picture annotation method in a video call comprises the following steps: constructing, on the side that needs to annotate, a video picture with the same proportion, taking the video picture sent by the sender as a reference; constructing an annotation canvas over the constructed same-proportion video picture; constructing an annotated image on the constructed same-proportion video picture; extracting an image array near the vertices of the annotated image, that is, extracting the image within a preset range of each vertex or of the circle center and extracting its gray pixel array; constructing real-time annotation data from the pixel arrays; and parsing the annotation data. The method first ensures that annotation positions are displayed accurately on the receiver's canvas by designing a reasonable video display and annotation proportion; second, it greatly simplifies the data through data assembly, which improves transmission efficiency; and finally, it ensures that the annotated graphics are not misplaced between annotator and receiver by computing the similarity of the video images at the graphic vertices.
Description
Technical Field
The invention relates to the technical field of video calls and real-time data, and in particular to an efficient real-time picture annotation method in a video call.
Background
Police work often involves real-time command and dispatch, in which real-time video calls are a common command mode. During command, on-site officers typically transmit a live video picture back to the command hall by mobile phone, and the command center makes decisions based on the transmitted picture. The on-site situation, however, is often complex, and the two parties need to communicate with each other: the command center must annotate the video picture in real time to direct on-site officers' attention to a specific area of the scene, a specific piece of evidence, and so on. To do this, the command center can annotate graphics such as line segments, circles, rectangles and arrows in the video picture, and the on-site officers respond to the annotations accordingly. In the prior art, however, because the video display and annotation proportions are unreasonable and the data transmission model is complex, neither annotation accuracy nor transmission timeliness can be guaranteed.
Disclosure of Invention
In view of the above, the present invention provides an efficient real-time picture annotation method in a video call that overcomes, or at least partially solves, the problems mentioned above.
An efficient real-time picture annotation method in a video call comprises the following steps:
S100, constructing, on the side that needs to annotate, a video picture with the same proportion, taking the video picture sent by the sender as a reference, wherein the width of the video picture is w and the height is h;
s200, constructing and labeling canvas for the constructed video picture with the same proportion, laying a transparent canvas on the video picture, wherein the width of the transparent canvas is w, the height of the transparent canvas is h, and the height and the width of the canvas are proportionally segmented according to a preset proportion;
s300, constructing an annotated image on the constructed video picture with the same proportion, extracting the coordinate position of the annotated image drawn on the canvas by a user, and extracting the coordinate position of the key point of the annotated image;
s400, extracting an image array near the vertex of the marked image, extracting an image in a preset range of the vertex or the circle center of the marked image, and extracting an image gray pixel array;
s500, constructing real-time annotation data, packaging the graph position data obtained in the S300 and the image data near the vertex or the circle center obtained in the S400, and pushing the data to an annotation receiving party in an instant pushing mode.
S600, analyzing the marked data: and analyzing the annotation data according to the defined format to obtain position data of the graph and image data near the vertex, extracting and analyzing the image data near the vertex in the real-time video, performing similarity calculation with the image data obtained in the S400, and drawing the corresponding graph if the similarity is higher than a preset threshold value.
Further, in S200, the height and width of the canvas are divided proportionally according to a preset ratio of 1000: the width and the height are each divided into 1000 equal segments, so each width segment is w/1000 and each height segment is h/1000.
Further, in S300, the extracted key-point coordinate positions of the annotated image at least include those of line segments, circles, rectangles and arrows.
Further, the method for extracting the coordinate position of the key point of the annotation image comprises the following steps:
taking the upper-left corner of the video picture as the origin of the coordinate system, with downward as the positive y direction and rightward as the positive x direction: when the annotated image is a line segment, the coordinate positions (w1, h1) and (w2, h2) of its two endpoints are acquired, and the acquired data are assembled as "0,w1,h1,w2,h2", where 0 denotes that the annotation is a line segment.
Further, the method for extracting the coordinate position of the key point of the annotation image further comprises the following steps: when the marked image is a circle, the center coordinates (w, h) of the circle are obtained, the radius is r, and the obtained data are assembled into '1, w, h, r', wherein 1 represents that the marked image is a circle.
Further, the method for extracting the key-point coordinate positions of the annotated image further comprises: when the annotated image is a rectangle, the coordinate positions (w1, h1), (w2, h2), (w3, h3), (w4, h4) of its four vertices are acquired, and the acquired data are assembled as "2,w1,h1,w2,h2,w3,h3,w4,h4", where 2 denotes that the annotation is a rectangle.
Further, the method for extracting the key-point coordinate positions of the annotated image further comprises: when the annotated image is an arrow, the coordinate positions (w1, h1) and (w2, h2) of its two ends are acquired, and the acquired data are assembled as "3,w1,h1,w2,h2", where 3 denotes that the annotation is an arrow.
Further, in S400, an image within a preset range of a vertex or a circle center of the labeled image is extracted, where the preset range is a 10 × 10 pixel range.
Further, the extracted image gray pixel array is:

P = [ p1×1 ... p1×10 ; p2×1 ... p2×10 ; ... ; p10×1 ... p10×10 ]

where p1×1, ..., p10×10 denote the gray values of the pixels around each annotated point; converting the gray array into a character string yields "p1×1,p1×2,...,p10×10".
Further, in S600, a cosine similarity calculation method is used for similarity calculation.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
the high-efficiency real-time picture marking method in the video call can extract features of marks with different shapes, firstly ensures that the marked positions are accurately displayed on canvas of a receiver by designing reasonable video display and marking proportion, secondly highly simplifies data by data assembly, improves transmission efficiency, and finally ensures that the graphic positions of the marker and the receiver do not have any dislocation by calculating the similarity of video images at the graphic vertexes. The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an efficient real-time image annotation method in a video call according to embodiment 1 of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1
The embodiment discloses a high-efficiency real-time picture marking method in video call, which comprises the following steps:
s100, constructing a video picture with the same proportion on the side needing to be marked by taking a video picture sent by a sender as a reference, wherein the width of the video picture is w, and the height of the video picture is h. It can be understood that, in this embodiment, the sender is a field policeman, the party to be labeled is a commander in the command hall, the field policeman usually transmits a real-time video image back to the command hall through a mobile phone, and the command center makes a decision according to the transmitted video image.
S200, constructing and labeling canvas for the constructed video picture with the same proportion, laying a transparent canvas on the video picture, wherein the width of the transparent canvas is w, the height of the transparent canvas is h, and the height and the width of the canvas are proportionally segmented according to a preset proportion.
In some preferred embodiments, in S200, the height and width of the canvas are divided proportionally according to a preset ratio of 1000: the width and the height are each divided into 1000 equal segments, so each width segment is w/1000 and each height segment is h/1000.
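The proportional segmentation above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function names (`to_grid`, `to_pixels`) and the rounding behavior are assumptions. It maps pixel coordinates on the annotator's w × h canvas into the 1000-segment grid, so the receiver can reproduce them on a canvas of any size with the same aspect ratio.

```python
# Hypothetical sketch of the 1000-segment proportional grid from S200.
# Names and rounding behavior are assumptions, not the patent's code.
SEGMENTS = 1000  # the preset ratio

def to_grid(x_px, y_px, w, h):
    """Map absolute pixel coordinates on a w x h canvas to grid units (0..1000)."""
    return round(x_px * SEGMENTS / w), round(y_px * SEGMENTS / h)

def to_pixels(gx, gy, w, h):
    """Map grid units back to pixels on a receiver canvas of size w x h."""
    return gx * w / SEGMENTS, gy * h / SEGMENTS
```

Because only grid units are transmitted, the same annotation lands at the same relative position even when sender and receiver render the video at different resolutions.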
S300, constructing an annotated image on the constructed video picture with the same proportion, extracting the coordinate position of the annotated image drawn on the canvas by the user, and extracting the coordinate position of the key point of the annotated image.
In this embodiment, the extracted key-point coordinate positions of the annotated image at least include those of line segments, circles, rectangles and arrows.
In some preferred embodiments, the method for extracting the coordinate position of the key point of the annotation image comprises the following steps:
taking the upper-left corner of the video picture as the origin of the coordinate system, with downward as the positive y direction and rightward as the positive x direction: when the annotated image is a line segment, the coordinate positions (w1, h1) and (w2, h2) of its two endpoints are acquired, and the acquired data are assembled as "0,w1,h1,w2,h2", where 0 denotes that the annotation is a line segment.
In some preferred embodiments, the method for extracting the coordinate positions of the key points of the annotation image further comprises: when the marked image is a circle, the center coordinates (w, h) of the circle are obtained, the radius is r, and the obtained data are assembled into '1, w, h, r', wherein 1 represents that the marked image is a circle.
In some preferred embodiments, the method for extracting the key-point coordinate positions of the annotated image further comprises: when the annotated image is a rectangle, the coordinate positions (w1, h1), (w2, h2), (w3, h3), (w4, h4) of its four vertices are acquired, and the acquired data are assembled as "2,w1,h1,w2,h2,w3,h3,w4,h4", where 2 denotes that the annotation is a rectangle.
In some preferred embodiments, the method for extracting the key-point coordinate positions of the annotated image further comprises: when the annotated image is an arrow, the coordinate positions (w1, h1) and (w2, h2) of its two ends are acquired, and the acquired data are assembled as "3,w1,h1,w2,h2", where 3 denotes that the annotation is an arrow.
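The four assembly formats described above ("0,w1,h1,w2,h2", "1,w,h,r", "2,...", "3,...") can be produced by a single small encoder. The sketch below is illustrative; the function name and the `kind` labels are assumptions, only the type codes and comma-separated layout come from the text.

```python
# Illustrative encoder for the assembled annotation strings described above:
# type code 0 = line segment, 1 = circle, 2 = rectangle, 3 = arrow,
# followed by the key-point data (coordinates, plus the radius for a circle).
def assemble(kind, *values):
    codes = {"line": 0, "circle": 1, "rect": 2, "arrow": 3}
    return ",".join(str(v) for v in (codes[kind], *values))
```

For example, `assemble("line", 10, 20, 30, 40)` yields the string "0,10,20,30,40"; keeping the payload as one short comma-separated string is what the text calls highly simplifying the data for transmission.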
S400, extracting an image array near the vertex of the marked image, extracting an image in a preset range of the vertex or the circle center of the marked image, and extracting an image gray pixel array; in some preferred embodiments, the image within a preset range of the vertex or the center of the annotation image is extracted, wherein the preset range is a 10 × 10 pixel range.
In some preferred embodiments, the extracted image gray pixel array is:

P = [ p1×1 ... p1×10 ; p2×1 ... p2×10 ; ... ; p10×1 ... p10×10 ]

where p1×1, ..., p10×10 denote the gray values of the pixels around each annotated point; converting the gray array into a character string yields "p1×1,p1×2,...,p10×10".
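The patch extraction and string conversion of S400 can be sketched as below, assuming the frame is already a grayscale image held as a nested row-major list; the function name and the lack of border handling are assumptions made for brevity.

```python
# Sketch of S400: cut a 10x10 grayscale patch centered on an annotation
# vertex (or circle center) and flatten it into the comma-separated string.
# Assumes the vertex is far enough from the frame border that the patch fits.
def gray_patch_string(frame, cx, cy, size=10):
    half = size // 2
    patch = [frame[y][cx - half:cx - half + size]
             for y in range(cy - half, cy - half + size)]
    return ",".join(str(p) for row in patch for p in row)
```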
S500, constructing real-time annotation data, packaging the graph position data obtained in the S300 and the image data near the vertex or the circle center obtained in the S400, and pushing the data to an annotation receiving party in an instant pushing mode.
S600, analyzing the marked data: and analyzing the annotation data according to the defined format to obtain position data of the graph and image data near the vertex, extracting and analyzing the image data near the vertex in the real-time video, performing similarity calculation with the image data obtained in the S400, and drawing the corresponding graph if the similarity is higher than a preset threshold value. In some preferred embodiments, the similarity calculation is performed using a cosine similarity calculation method.
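The parsing-and-matching step of S600 can be sketched as follows. The threshold value 0.9 and the function names are assumptions chosen for illustration; the cosine similarity formula itself matches the preferred embodiment.

```python
import math

# Sketch of S600: compare the transmitted vertex patch with the patch
# extracted from the local real-time video using cosine similarity, and
# draw the annotation only when the similarity clears a preset threshold.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def should_draw(sent_patch_str, local_patch, threshold=0.9):
    sent = [int(v) for v in sent_patch_str.split(",")]
    return cosine_similarity(sent, local_patch) >= threshold
```

If the local patch has drifted (for instance, the camera moved), the similarity falls below the threshold and the graphic is not drawn, which is how the method avoids misplaced annotations on the receiver's side.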
The efficient real-time picture annotation method in a video call can extract features from annotations of different shapes. First, it ensures that annotation positions are displayed accurately on the receiver's canvas by designing a reasonable video display and annotation proportion; second, it greatly simplifies the data through data assembly, which improves transmission efficiency; and finally, it ensures that the annotated graphics are not misplaced between annotator and receiver by computing the similarity of the video images at the graphic vertices.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may also reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Claims (10)
1. An efficient real-time picture annotation method in a video call is characterized by comprising the following steps:
s100, constructing a video picture with the same proportion on the side needing to be marked by taking a video picture sent by a sender as a reference, wherein the width of the video picture is w, and the height of the video picture is h;
s200, constructing and labeling canvas for the constructed video picture with the same proportion, laying a transparent canvas on the video picture, wherein the width of the transparent canvas is w, the height of the transparent canvas is h, and the height and the width of the canvas are proportionally segmented according to a preset proportion;
s300, constructing an annotated image on the constructed video picture with the same proportion, extracting the coordinate position of the annotated image drawn on the canvas by a user, and extracting the coordinate position of the key point of the annotated image;
s400, extracting an image array near the vertex of the marked image, extracting an image in a preset range of the vertex or the circle center of the marked image, and extracting an image gray pixel array;
s500, constructing real-time annotation data, packaging the graph position data obtained in the S300 and the image data near the vertex or the circle center obtained in the S400, and pushing the data to an annotation receiver in an instant pushing mode;
s600, analyzing the marked data: and analyzing the annotation data according to the defined format to obtain position data of the graph and image data near the vertex, extracting and analyzing the image data near the vertex in the real-time video, performing similarity calculation with the image data obtained in the S400, and drawing the corresponding graph if the similarity is higher than a preset threshold value.
2. The efficient real-time picture annotation method in a video call as claimed in claim 1, wherein in S200 the height and width of the canvas are divided proportionally according to a preset ratio of 1000: the width and the height are each divided into 1000 equal segments, each width segment being w/1000 and each height segment being h/1000.
3. The method as claimed in claim 1, wherein the key-point coordinate positions of the annotated image extracted in S300 at least include those of line segments, circles, rectangles and arrows.
4. The method as claimed in claim 3, wherein the method for extracting the coordinates of the key points of the annotation image comprises:
taking the upper-left corner of the video picture as the origin of the coordinate system, with downward as the positive y direction and rightward as the positive x direction: when the annotated image is a line segment, the coordinate positions (w1, h1) and (w2, h2) of its two endpoints are acquired, and the acquired data are assembled as "0,w1,h1,w2,h2", where 0 denotes that the annotation is a line segment.
5. The method for efficient real-time annotation of scenes in a video call of claim 3, wherein the method for extracting the coordinates of the key points of the annotated image further comprises: when the marked image is a circle, the center coordinates (w, h) of the circle are obtained, the radius is r, and the obtained data are assembled into '1, w, h, r', wherein 1 represents that the marked image is a circle.
6. The method for efficient real-time annotation of pictures in a video call of claim 3, wherein the method for extracting the key-point coordinate positions of the annotated image further comprises: when the annotated image is a rectangle, the coordinate positions (w1, h1), (w2, h2), (w3, h3), (w4, h4) of its four vertices are acquired, and the acquired data are assembled as "2,w1,h1,w2,h2,w3,h3,w4,h4", where 2 denotes that the annotation is a rectangle.
7. The method for efficient real-time annotation of pictures in a video call of claim 3, wherein the method for extracting the key-point coordinate positions of the annotated image further comprises: when the annotated image is an arrow, the coordinate positions (w1, h1) and (w2, h2) of its two ends are acquired, and the acquired data are assembled as "3,w1,h1,w2,h2", where 3 denotes that the annotation is an arrow.
8. The method for efficient real-time annotation of pictures in a video call as claimed in claim 1, wherein in S400 an image within a preset range of the vertex or circle center of the annotated image is extracted, the preset range being a 10 × 10 pixel range.
10. The method for efficient real-time annotation of pictures in a video call as claimed in claim 1, wherein in S600, the similarity calculation is performed by using a cosine similarity calculation method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010725304.XA CN111918016A (en) | 2020-07-24 | 2020-07-24 | Efficient real-time picture marking method in video call |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111918016A true CN111918016A (en) | 2020-11-10 |
Family
ID=73280826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010725304.XA Pending CN111918016A (en) | 2020-07-24 | 2020-07-24 | Efficient real-time picture marking method in video call |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111918016A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5521843A (en) * | 1992-01-30 | 1996-05-28 | Fujitsu Limited | System for and method of recognizing and tracking target mark |
CN104506621A (en) * | 2014-12-24 | 2015-04-08 | 北京佳讯飞鸿电气股份有限公司 | Method for performing long-distance guidance by use of video annotation |
CN105608209A (en) * | 2015-12-29 | 2016-05-25 | 南威软件股份有限公司 | Video labeling method and video labeling device |
CN107333087A (en) * | 2017-06-27 | 2017-11-07 | 京东方科技集团股份有限公司 | A kind of information sharing method and device based on video session |
WO2018104834A1 (en) * | 2016-12-07 | 2018-06-14 | Yogesh Chunilal Rathod | Real-time, ephemeral, single mode, group & auto taking visual media, stories, auto status, following feed types, mass actions, suggested activities, ar media & platform |
CN108196927A (en) * | 2017-12-29 | 2018-06-22 | 北京淳中科技股份有限公司 | A kind of mask method, device and system |
CN108337532A (en) * | 2018-02-13 | 2018-07-27 | 腾讯科技(深圳)有限公司 | Perform mask method, video broadcasting method, the apparatus and system of segment |
CN110248136A (en) * | 2019-07-05 | 2019-09-17 | 吴雅凝 | The device and method of flag transmission in a kind of communication of real-time audio and video |
CN110705405A (en) * | 2019-09-20 | 2020-01-17 | 阿里巴巴集团控股有限公司 | Target labeling method and device |
CN111367445A (en) * | 2020-03-31 | 2020-07-03 | 中国建设银行股份有限公司 | Image annotation method and device |
CN111368820A (en) * | 2020-03-06 | 2020-07-03 | 腾讯科技(深圳)有限公司 | Text labeling method and device and storage medium |
- 2020-07-24: CN application CN202010725304.XA filed, published as CN111918016A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112348815B (en) | Image processing method, image processing apparatus, and non-transitory storage medium | |
US10129385B2 (en) | Method and apparatus for generating and playing animated message | |
CN102110235A (en) | Embedded media markers and systems and methods for generating and using them | |
CN110659633A (en) | Image text information recognition method and device and storage medium | |
CN103890779A (en) | Device and method for automatically identifying a QR code | |
CN112541484B (en) | Face matting method, system, electronic device and storage medium | |
CN113012075A (en) | Image correction method and device, computer equipment and storage medium | |
JP2007241356A (en) | Image processor and image processing program | |
US8917912B2 (en) | Object identification system and method of identifying an object using the same | |
CN112233062A (en) | Surface feature change detection method, electronic device, and storage medium | |
CN111767889A (en) | Formula recognition method, electronic device and computer readable medium | |
CN113297986A (en) | Handwritten character recognition method, device, medium and electronic equipment | |
CN111918016A (en) | Efficient real-time picture marking method in video call | |
CN104933430B (en) | A kind of Interactive Image Processing method and system for mobile terminal | |
CN113947529B (en) | Image enhancement method, model training method, component identification method and related equipment | |
CN109141457B (en) | Navigation evaluation method and device, computer equipment and storage medium | |
CN110717060A (en) | Image mask filtering method and device and storage medium | |
CN114399623B (en) | Universal answer identification method, system, storage medium and computing device | |
CN115861922A (en) | Sparse smoke and fire detection method and device, computer equipment and storage medium | |
CN113112531A (en) | Image matching method and device | |
CN114359352A (en) | Image processing method, apparatus, device, storage medium, and computer program product | |
US9092687B2 (en) | Automatically converting a sign and method for automatically reading a sign | |
CN115731554A (en) | Express mail list identification method and device, computer equipment and storage medium | |
CN113128470A (en) | Stroke recognition method and device, readable medium and electronic equipment | |
CN112201118B (en) | Logic board identification method and device and terminal equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201110 |