CN111757074A - Image sharing marking system - Google Patents

Image sharing marking system

Info

Publication number
CN111757074A
CN111757074A (application CN201910250126.7A)
Authority
CN
China
Prior art keywords
image
terminal
target object
annotation
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910250126.7A
Other languages
Chinese (zh)
Inventor
刘德建
汪松
郭玉湖
陈宏�
方振华
关胤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Tianquan Educational Technology Ltd
Original Assignee
Fujian Tianquan Educational Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Tianquan Educational Technology Ltd filed Critical Fujian Tianquan Educational Technology Ltd
Priority to CN201910250126.7A priority Critical patent/CN111757074A/en
Publication of CN111757074A publication Critical patent/CN111757074A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An image sharing annotation system comprises a first terminal and a second terminal. The first terminal comprises an image input unit, a projector, a processing unit, and a network module; the second terminal comprises a display module and an annotation entry module. The image input unit acquires an image of a target object; the processing unit receives the image of the target object and transmits it to the second terminal through the network module; the second terminal displays the image of the target object on the display module, receives annotation entry information input through the annotation entry module, and sends that information back to the first terminal. The processing unit then calculates projection content according to the annotation entry information, and the projector projects the projection content. The technical solution of the invention presents synchronized image-and-text information across long distances, solves the problem of remotely sharing annotated content, and has strong application prospects in remote information sharing.

Description

Image sharing marking system
Technical Field
The invention relates to the field of optical analysis and design, and in particular to a system for annotating shared images.
Background
With the development of computer technology, existing video capture and interaction systems can directly store captured pictures, but they lack a technical means for interaction between the person capturing the video and a remote viewer. If remotely entered annotation information could be obtained for a captured image, extended information could be generated and displayed, enriching the functions of various video terminals and adding enjoyment to daily life.
Disclosure of Invention
For this reason, it is necessary to provide a technical solution capable of optical acquisition, sharing, and annotation.
In order to achieve the above object, the inventors provide an image sharing annotation system, which includes a first terminal and a second terminal, wherein the first terminal includes an image input unit, a projector, a processing unit, and a network module; the second terminal includes a display module and an annotation entry module;
the image input unit is used for acquiring an image of a target object, the processing unit is used for receiving the image of the target object and transmitting it to the second terminal through the network module, and the second terminal displays the image of the target object on the display module and receives the annotation entry information input through the annotation entry module; the second terminal is also used for sending the annotation entry information to the first terminal;
the processing unit is further used for calculating projection content according to the annotation entry information, and the projector is used for projecting the projection content.
Specifically, the system further comprises a signal device. The signal device is used for emitting an optical signal, and the processing unit is used for tracking the optical signal, determining a movement track of the signal device according to the optical signal, demarcating a selected area of the target object according to the movement track, and transmitting the selected area to the second terminal through the network module.
Further, the annotation entry information includes an annotation graphic and the positional relationship of the annotation graphic relative to the image of the target object. The processing unit calculating the projection content according to the annotation entry information includes calculating the positional relationship of the projection graphic relative to the target object from the positional relationship of the annotation graphic relative to the image of the target object; the projection content comprises the projection graphic and the positional relationship of the projection graphic relative to the target object.
Further, the annotation entry module of the second terminal is also used for recording the positional relationship between the annotation graphic and the image of the selected area.
Specifically, the processing unit is further configured to receive a user instruction and, according to the user instruction, to store the image of the target object with the projected annotation information superimposed on it.
Specifically, the image input unit and the projector are arranged above a desktop and are connected by a support.
Compared with the prior art, the image acquired by the first terminal is shared with the second terminal, annotated at the second terminal, and the annotated content is then transmitted back to the first terminal for projection. Synchronized remote image-and-text information can thus be presented, solving the problem of remotely sharing annotated content, with strong application prospects in remote information sharing.
Drawings
FIG. 1 is a schematic diagram of an annotation system based on image acquisition according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of an annotation effect according to an embodiment;
FIG. 3 is a schematic diagram of an image sharing annotation system according to an embodiment;
FIG. 4 is a schematic diagram of an annotation effect according to an embodiment.
Description of the reference numerals
10. A first terminal;
102. an image input unit;
104. a projector;
106. a processing unit;
108. a network module;
110. a signaling device;
20. a second terminal;
202. a display module;
204. an annotation entry module.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to FIG. 1, an annotation system based on image acquisition according to the present invention includes a first terminal 10, where the first terminal includes an image input unit 102, a projector 104, and a processing unit 106.
In this solution, the image input unit faces the same direction as the projector, and both can generally be pointed at the same target object. The image input unit acquires the optical signal of the target object; the processing unit identifies the specific content of the target object, generates corresponding annotation information according to that content, and calculates the positional relationship between the annotation information and the target object; the projector projects the annotation information according to the positional relationship calculated by the processing unit. Specifically, the target object may be planar content or a three-dimensional object, and the projector may project onto a plane or onto the surface of the three-dimensional object. According to the specific content of the target object, such as text, pictures, spatial configurations, artistic forms, or decorations, corresponding annotation information is generated: after the specific content is identified, annotation information for that content is retrieved from an associated database, the projection position of the annotation information is calculated (for example, a flat position on the image where the color-change gradient is small), and the annotation information together with its position-coordinate relationship is then transmitted to the projector for display.
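As an illustration of the "flat position with a small color-change gradient" rule above, the following is a minimal sketch (not from the patent; the function name, list-of-lists grayscale representation, and exhaustive window scan are assumptions) that finds the image window with the smallest summed gradient, i.e., the flattest place to project an annotation:

```python
def find_flat_region(gray, win_h, win_w):
    """Return (row, col) of the top-left corner of the window whose summed
    local gradient magnitude is smallest - the visually 'flattest' area,
    where a projected annotation stays readable."""
    h, w = len(gray), len(gray[0])
    # Per-pixel gradient magnitude: |dx| + |dy| against right/lower neighbours.
    grad = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(gray[y][x] - gray[y][x + 1]) if x + 1 < w else 0
            dy = abs(gray[y][x] - gray[y + 1][x]) if y + 1 < h else 0
            grad[y][x] = dx + dy
    best, best_pos = None, (0, 0)
    for y in range(h - win_h + 1):
        for x in range(w - win_w + 1):
            s = sum(grad[y + j][x + i] for j in range(win_h) for i in range(win_w))
            if best is None or s < best:
                best, best_pos = s, (y, x)
    return best_pos
```

A production system would use a camera frame and an integral image instead of this O(h·w·win) scan, but the placement criterion is the same.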
For example, in one embodiment, the image input unit and the projector are disposed on a first terminal that is a handheld unit. A user can carry the first terminal into a museum and scan an exhibit. The processing unit identifies the specific content of the target object, such as a bronze vessel, retrieves the corresponding annotation information, such as a knowledge introduction about the bronze vessel, and projects that introduction onto its surface; for a painting or calligraphy work, the same steps are performed and the relevant annotation is projected onto the surface of the work. The annotation information can be obtained from a preset database or through a network search. The specific projection position can be calculated by a preset rule such as the color-change gradient; projection onto a three-dimensional object can be spatially positioned using two or more image input units, with the spatial-coordinate correspondence used to set the focal length of the projector. Through this technical solution, the specific content of the target object can be identified, the projected content determined accordingly, the projection placed in a flat area found by color computation, and, with the processing unit handling the position coordinates, annotation information can be projected onto the surface of the object. This satisfies the public's desire to acquire knowledge and solves the problem of composite content display in the prior art through mixed reality.
In a further application example, to better illustrate the functions the annotation system can achieve, in this embodiment the target object is a paper-media object, and the specific content is the content presented by the paper-media carrier, which may be text and graphics. After acquiring the content of the paper-media carrier, the image input unit transmits it to the processing unit. The processing unit identifies the images and checks whether the database stores a correspondence between the acquired paper-media content, such as the text and graphics, and annotation information; if so, the corresponding annotation information stored in the database is retrieved. In an embodiment where the paper-media carrier is a textbook, a student sits at a study desk, and the image input unit and projector are adjusted to point simultaneously at the desktop in front of the student. When the processing unit determines that certain words in the textbook are dialect terms, it determines the coordinates of those words in the camera plane, searches the database for annotations of those terms, i.e., annotation information corresponding to the text content, determines the coordinate range into which the annotations can be projected, i.e., the specific coordinates of blank positions on the paper media, finalizes the projection coordinates through re-typesetting and similar steps, and finally controls the projector to project the annotation information into the blank space beside the textbook text.
In this way, the solution of the invention achieves extended display of auxiliary learning content. Users can have learning content projected directly in a mixed-reality manner without purchasing additional teaching reference materials, effectively improving the efficiency of information acquisition. The extended content can be selectively displayed or closed according to students' needs, markedly improving the practicality of extended learning.
In other specific embodiments, the target object is a paper homework assignment. The image input unit obtains an image of the assignment, and the processing unit analyzes the image, mainly to identify the answer content on the target object. The answer content can be extracted simply, for example by cropping the answer area directly, or by pre-recording the content of the blank exercise book in a database and then performing an image subtraction operation to obtain the answer content. Further, the answer formulas on the assignment can be recognized through deep learning; for example, an answer such as "5-3=8" can automatically be judged incorrect by machine learning. Finally, correct-or-incorrect marking information is generated according to the answer content on the assignment and its position, together with the coordinate relationship of each mark, achieving the effect of correcting each question on the assignment. The embodiment shown in FIG. 2 illustrates the effect of automatic correction: a check mark is projected onto the assignment on the desktop after automatic judgment by the machine. This solution better achieves auxiliary video-image processing and solves the problem of superimposed projection display of composite information.
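The image-subtraction step above can be sketched as follows. This is a simplified illustration (the function name, threshold value, and list-of-lists image representation are assumptions; a real system would first register the camera frame against the stored blank page):

```python
def extract_answer_region(blank, answered, thresh=30):
    """Subtract the pre-recorded blank worksheet from the answered one and
    return the bounding box (top, left, bottom, right) of pixels that
    changed by more than `thresh` - i.e., the student's handwriting.
    Returns None if nothing changed."""
    ys, xs = [], []
    for y, (row_b, row_a) in enumerate(zip(blank, answered)):
        for x, (pb, pa) in enumerate(zip(row_b, row_a)):
            if abs(pa - pb) > thresh:
                ys.append(y)
                xs.append(x)
    if not ys:
        return None
    return (min(ys), min(xs), max(ys), max(xs))
```

The resulting box is what a recognizer (or the remote corrector) would then be handed as "answer content".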
In some other embodiments, the processing unit is further configured to receive user instructions, which may be input through existing means such as keys, a mouse, or a touch screen included in the first terminal. After receiving a user instruction, the processing unit obtains an image of the target object through the image input unit; the image is captured while the projected annotation information is superimposed on the native content of the target object, and the processing unit stores this image of the target object with the projected annotation information superimposed. In an embodiment where the projected annotation information is a correction result or wrong-answer analysis and the target object is a student's answer sheet, this scheme conveniently stores and compiles the student's collection of wrong answers.
In some embodiments, to capture the target object on a study desk, it is only necessary that the cooperating image input unit and projector point at the same desktop area. In some classrooms, several first terminals may also be mounted on the ceiling, with the image input unit and projector of each first terminal directed at a different desktop. The scheme of this embodiment solves the problem of positioning the first terminal.
Specifically, the image input unit need only capture at least a single frame, so it can be configured as a still camera. In a preferred embodiment, where the content of the target object needs to be tracked in real time, the image input unit is preferably configured as a video camera. The image input unit can also be a simple optical sensor array whose working duration the processing unit controls as needed, obtaining a single picture or a continuous video.
In other embodiments, as shown in FIG. 3, remote annotation-information projection is further performed. An image sharing annotation system is provided that includes a first terminal 10 and a second terminal 20, where the first terminal includes an image input unit 102, a projector 104, a processing unit 106, and a network module 108, and the second terminal 20 includes a display module 202 and an annotation entry module 204. The image input unit 102 acquires video of a target object, which may be a single-frame picture or multi-frame video. Specifically, the target object may be planar content or a three-dimensional object, with content such as text, pictures, spatial configurations, artistic forms, or decorations, and the projector may project onto a plane or onto the surface of the three-dimensional object. After receiving the image of the target object, the processing unit 106 transmits it to the second terminal 20 through the network module 108; the second terminal displays the image of the target object on the display module 202 and receives the annotation entry information input through the annotation entry module. The second terminal is also used to send the annotation entry information to the first terminal; its data reception and transmission can be performed through a second network module. The second terminal may also have its own processor controlling the above steps, and may be a handheld mobile phone, a tablet, a notebook computer, a desktop computer, and so on. After the first terminal receives the annotation information sent back by the second terminal, the processing unit calculates the projection content according to the annotation entry information and instructs the projector to project it.
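The round trip between the two terminals — the first terminal sending an image, the second sending annotation entry information back — requires some wire format over the network modules. A minimal length-prefixed JSON framing is sketched below; the message kinds and field names are illustrative assumptions, not something the patent specifies:

```python
import json
import struct

def pack_message(kind, payload):
    """Frame a message as a 4-byte big-endian length followed by a JSON body.
    `kind` might be 'image' (first -> second terminal) or 'annotation'
    (second -> first); both names are hypothetical."""
    body = json.dumps({"kind": kind, "payload": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unpack_message(data):
    """Inverse of pack_message; returns (kind, payload, remaining_bytes) so a
    receiver can consume several frames from one buffer."""
    (length,) = struct.unpack(">I", data[:4])
    body = json.loads(data[4:4 + length].decode("utf-8"))
    return body["kind"], body["payload"], data[4 + length:]
```

Image pixels would in practice be carried as compressed binary (e.g., JPEG bytes) rather than JSON, but the framing idea is the same.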
As can be seen from the figure, the second terminal is preferably a tablet computer whose display module is a touch screen. The operator of the second terminal can write and draw directly on the tablet screen as annotation entry input. Such annotations can be forwarded over the network and projected in front of the user of the first terminal. FIG. 4 shows the final effect after correction at the second terminal: the check mark drawn at the second terminal is projected onto the assignment on the desktop. Through this design, the user at the first terminal shares the content in front of them through the camera, the user at the second terminal sees the same content and annotates it, and the annotation is shared back, realizing remote sharing of image-and-text information and achieving remote guidance, annotation, and education.
In some specific embodiments, the system further includes a signal device 110 used to emit an optical signal, which may be active light or, for example, a colored block; the signal device can be manipulated by hand to trace a movement track. The processing unit tracks the optical signal by analyzing the video, determines the movement track of the signal device from the optical signal, demarcates the selected area of the target object according to the track, and transmits only the image of the selected area to the second terminal through the network module. At the second terminal, the display need only show the content of the selected area, and the annotation entry information need only annotate that content. This arrangement saves network transmission resources, occupies less bandwidth, and achieves effective selection according to the gesture signal of the first-terminal user.
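Demarcating the selected area from the tracked light spot can be sketched as taking the bounding rectangle of the tracked positions and cropping the frame to it. This is a minimal illustration (the function names and the axis-aligned-rectangle choice are assumptions; the patent does not fix the shape of the demarcated region):

```python
def selected_region(track, margin=0):
    """Given the tracked (x, y) positions of the light spot, return the
    axis-aligned selection rectangle (left, top, right, bottom)."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def crop(image, region):
    """Cut the selected region out of a row-major image so that only this
    sub-image needs to be sent over the network."""
    left, top, right, bottom = region
    return [row[left:right + 1] for row in image[top:bottom + 1]]
```

Only the cropped sub-image then travels to the second terminal, which is where the bandwidth saving described above comes from.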
In a further embodiment, the annotation entry information includes an annotation graphic and the positional relationship between the annotation graphic and the image of the target object. For example, when the image of the target object is displayed at the second terminal, a position coordinate system is established there, and both the coordinates of the image and the coordinates of the subsequently received annotation graphic are recorded. In the embodiment where only the image of the selected area is shared, a coordinate system is established with the selected-area image as reference, and the position of the annotation graphic relative to that coordinate system is also recorded; when the first terminal calculates the positional relationship for the whole image, the position of the annotation graphic in the whole image is obtained by converting between the Cartesian coordinate systems of the selected area and the whole image, after which the projection position is calculated. Specifically, the processing unit calculates the projection content according to the annotation entry information, including calculating the positional relationship of the projection graphic relative to the target object from the positional relationship of the annotation graphic relative to the image of the target object; the projection content comprises the projection graphic and that positional relationship. Finally, the annotated content need only be superimposed and projected by the projector onto the area where the target object is located. This scheme better achieves cross-device scene display and shared annotation, improving the efficiency of mutual learning.
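The coordinate conversion described above — from selected-region coordinates back to whole-image coordinates, and on to projector coordinates — reduces to an origin offset plus a calibration mapping. The following sketch assumes a simple per-axis scale-and-offset calibration (a stand-in for a full camera-projector homography; every name here is an assumption):

```python
def region_to_full(pt, region_origin):
    """Convert a point expressed in selected-region coordinates into
    full-image coordinates by adding the region's origin offset."""
    return (pt[0] + region_origin[0], pt[1] + region_origin[1])

def full_to_projector(pt, scale, offset):
    """Map a full-image point into projector coordinates with a per-axis
    scale and offset, standing in for a calibrated homography."""
    return (pt[0] * scale[0] + offset[0], pt[1] * scale[1] + offset[1])
```

For example, an annotation drawn at (10, 5) inside a region whose top-left corner sits at (100, 40) of the camera frame lands at (110, 45) in the full image before the projector mapping is applied.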
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Therefore, based on the innovative concepts of the present invention, the technical solutions of the present invention can be directly or indirectly applied to other related technical fields by making changes and modifications to the embodiments described herein, or by using equivalent structures or equivalent processes performed in the content of the present specification and the attached drawings, which are included in the scope of the present invention.

Claims (6)

1. An image sharing annotation system, characterized by comprising a first terminal and a second terminal, wherein the first terminal comprises an image input unit, a projector, a processing unit, and a network module; the second terminal comprises a display module and an annotation entry module;
the image input unit is used for acquiring an image of a target object, the processing unit is used for receiving the image of the target object and transmitting it to the second terminal through the network module, and the second terminal displays the image of the target object on the display module and receives annotation entry information input through the annotation entry module; the second terminal is also used for sending the annotation entry information to the first terminal;
the processing unit is further used for calculating projection content according to the annotation entry information, and the projector is used for projecting the projection content.
2. The image sharing annotation system of claim 1, wherein the annotation entry information includes an annotation graphic and a positional relationship of the annotation graphic relative to the image of the target object, and wherein the processing unit calculating the projection content according to the annotation entry information includes calculating the positional relationship of the projection graphic relative to the target object from the positional relationship of the annotation graphic relative to the image of the target object; the projection content comprises the projection graphic and the positional relationship of the projection graphic relative to the target object.
3. The image sharing annotation system of claim 1, further comprising a signal device, wherein the signal device is configured to emit an optical signal, the processing unit is configured to track the optical signal, determine a movement trajectory of the signal device according to the optical signal, define a selected area of the target object according to the movement trajectory, and transmit the selected area to the second terminal through the network module.
4. The image sharing annotation system of claim 3, wherein the annotation entry module of the second terminal is further configured to record a positional relationship between the annotation graphic and the image of the selected area.
5. The image sharing annotation system of claim 1, wherein the processing unit is further configured to receive a user instruction, and the processing unit is further configured to store the target object image superimposed with the projection annotation information according to the user instruction.
6. The image sharing annotation system of claim 1, wherein the image input unit and the projector are disposed above a desktop, the system further comprising a support connected to the image input unit and the projector.
CN201910250126.7A 2019-03-29 2019-03-29 Image sharing marking system Pending CN111757074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910250126.7A CN111757074A (en) 2019-03-29 2019-03-29 Image sharing marking system


Publications (1)

Publication Number Publication Date
CN111757074A 2020-10-09

Family

ID=72672474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910250126.7A Pending CN111757074A (en) 2019-03-29 2019-03-29 Image sharing marking system

Country Status (1)

Country Link
CN (1) CN111757074A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1289086A (en) * 1999-09-21 2001-03-28 精工爱普生株式会社 Interactive display system
CN103428472A (en) * 2012-05-18 2013-12-04 郑州正信科技发展有限公司 Real object complete interactive communication method and device based on collaborative awareness
US9426416B2 (en) * 2012-10-17 2016-08-23 Cisco Technology, Inc. System and method for utilizing a surface for remote collaboration


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596418A * 2021-07-06 2021-11-02 Zuoyebang Education Technology (Beijing) Co., Ltd. Correction-assisted projection method, device, system and computer program product
CN115185437A * 2022-03-11 2022-10-14 HiScene (Shanghai) Information Technology Co., Ltd. Projection interaction method, device, medium and program product
WO2023168836A1 (en) * 2022-03-11 2023-09-14 亮风台(上海)信息科技有限公司 Projection interaction method, and device, medium and program product

Similar Documents

Publication Publication Date Title
US20230045386A1 (en) Interactive and shared surfaces
US9584766B2 (en) Integrated interactive space
Gurevich et al. Design and implementation of teleadvisor: a projection-based augmented reality system for remote collaboration
CN1951117B (en) Virtual flip chart method and apparatus
CN102685440B (en) The automatic selection of display information and switching
US20160261856A1 (en) Designing Content for Multi-View Displays
Adcock et al. Using projected light for mobile remote guidance
US20160335242A1 (en) System and Method of Communicating between Interactive Systems
US20210014456A1 (en) Conference device, method of controlling conference device, and computer storage medium
US10869009B2 (en) Interactive display
CN103376921A (en) Laser labeling system and method
US9658702B2 (en) System and method of object recognition for an interactive input system
CN113950822A (en) Virtualization of a physical active surface
CN111757074A (en) Image sharing marking system
CN111242704A (en) Method and electronic equipment for superposing live character images in real scene
WO2022174706A1 (en) Remote live streaming interaction method and system based on projection
CN111752376A (en) Labeling system based on image acquisition
US20110281252A1 (en) Methods and systems for reducing the number of textbooks used in educational settings
US20200007832A1 (en) Projector, computer readable memory medium, and display system
TW202016904A (en) Object teaching projection system and method thereof
Jedrysik et al. Interactive displays for command and control
CN115033128A (en) Electronic whiteboard control method based on image recognition, electronic whiteboard and readable medium
KR20160014759A (en) Apparatus and method for controlling image screen using portable terminal
CN106251714B (en) Simulation teaching system and method
US20230239442A1 (en) Projection device, display system, and display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20201009