CN113794824B - Indoor visual document intelligent interactive acquisition method, device, system and medium - Google Patents
Info
- Publication number
- CN113794824B CN113794824B CN202111081077.2A CN202111081077A CN113794824B CN 113794824 B CN113794824 B CN 113794824B CN 202111081077 A CN202111081077 A CN 202111081077A CN 113794824 B CN113794824 B CN 113794824B
- Authority
- CN
- China
- Prior art keywords
- document
- image
- acquisition
- display
- carrier
- Prior art date
- Legal status: Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/957—Light-field or plenoptic cameras or camera modules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The invention belongs to the technical field of intelligent interaction and in particular relates to an intelligent interactive acquisition method, device, system and medium for indoor visual documents. The intelligent interactive acquisition method for indoor visual documents comprises the following steps: controlling a wide-angle camera of an acquisition device to capture a scene image, the scene image comprising at least one document outline; locating and segmenting each document contour image from the scene image; obtaining an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command and each acquisition command comprises the position information of a designated document contour; controlling a tele camera of the acquisition device, according to the acquisition command queue, to capture the designated document corresponding to the position information and performing image processing to obtain a document image; and sending the document image to a first display area for display. The invention also comprises a device, a system and a medium for executing the method. The invention enables handwritten documents to be submitted and displayed digitally in a convenient way.
Description
Technical Field
The invention relates to the technical field of intelligent interaction, in particular to an intelligent interactive acquisition method, device, system and medium for indoor visual documents.
Background
In classroom teaching and daily meetings, students and teachers, or meeting participants, often need to exchange information in written form. For example, in classroom teaching a teacher needs to see in good time what a student has written in answer to a question, and in a meeting each participant needs to see in good time the meeting-related content written by the others. At present, electronic devices such as electronic writing boards, tablet computers and intelligent writing tools are used to collect this written information. Because every student or participant must be provided with a corresponding electronic device before everyone's writing can be collected, many devices have to be purchased and maintained and the cost of the whole interaction system is high. Moreover, long-term writing on the screens of some electronic devices can adversely affect the user's eyesight.
Disclosure of Invention
In view of the above, the embodiments of the invention provide an intelligent interactive acquisition method, device, system and medium for indoor visual documents, intended to solve the technical problem that, in the prior art, information interaction with written documents can only be achieved by having each user write on electronic equipment.
The technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides an intelligent interactive acquisition method for indoor visual documents, the method comprising:
s1: controlling a wide-angle camera of an acquisition device to capture a scene image, the scene image comprising: at least one document outline;
s2: positioning and segmenting each document contour image from the scene image;
s3: acquiring an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command, and each acquisition command comprises position information of a designated document contour;
s4: controlling a tele camera of the acquisition equipment to acquire a specified document corresponding to the position information according to the acquisition command queue, and performing image processing to obtain a document image;
s5: and sending the document image to a first display area for display.
Preferably, the method further comprises, between steps S2 and S3:
S6: locating and segmenting, from the scene image, a plurality of carrier images 505 corresponding to the carriers that respectively bear the documents, and adjusting the relative positional relationship of the carrier images 505 displayed in the second display area to be consistent with the relative positional relationship of the carriers in the room.
Preferably, the first display area and the second display area are located on the same display interface, and the step S4: controlling a tele camera to collect a specified document corresponding to the position information according to the collection command queue, and performing image processing to obtain a document image comprises the following steps:
when a document image of a specified document corresponding to the position information on a specified carrier of a second display area is acquired, the document image of the specified document is displayed on the display interface so as to enter the first display area for display in an animated manner along a specified path.
Preferably, the step S4: controlling the tele camera to collect the appointed document corresponding to the position information according to the collection command queue, and performing image processing to obtain a document image further comprises the following steps:
and when the collection command queue comprises more than two collection commands, the collection progress of the designated documents corresponding to the collection commands on the carriers and the queuing sequence numbers of the designated documents on the carriers are displayed on the second display area.
Preferably, the method further comprises, prior to step S1:
s01: the document contour acquisition triggering step comprises one of the following triggering modes:
Controlling the wide-angle camera to detect in real time whether a specified gesture is present on the carrier or within a preset range around the carrier in the scene image;
receiving an input instruction of a user for designating a carrier in the scene image;
and receiving an operation instruction of a user on a virtual function control key of a third display area on the display interface.
Preferably, the controlling of the wide-angle camera to detect in real time whether the specified gesture is present on the carrier or within the preset range around the carrier in the scene image includes:
controlling the superimposed display of a virtual trigger button in a designated area of the carrier image, wherein the virtual trigger button is in a first display state;
when the wide-angle camera recognizes that the specified gesture is present on the carrier, capturing the user's hand on the carrier and, in a carrier image that includes the image of the hand, displaying a prompting pattern around the image of the hand so as to prompt the user;
after the specified gesture has been recognized, setting the virtual trigger button to a second display state, wherein the first display state is different from the second display state.
Preferably, when the scene image is a scene image of an indoor answering link, the prompting pattern comprises an answering sequence number of a user.
Preferably, the triggering mode further includes: controlling the tele camera by means of a remote controller to collect the document outline, wherein a control instruction sent by the remote controller controls the display interface of the display screen through the acquisition device.
Preferably, the document comprises multiple pages of content, and when the wide-angle camera recognizes the gesture instruction for collecting each page of content, the currently collected page content of the document is displayed on the display interface and the number of collected pages of the document is displayed in a designated display area of the display interface.
Preferably, the acquisition device is integrated with a microphone array, the method comprising:
locating a position of a user making a voice in the room;
controlling the acquisition equipment closest to the position of the user to acquire the voice of the user;
and controlling other appointed acquisition devices or electronic devices with the audio playing function indoors to carry out sound amplification.
Preferably, the method comprises:
receiving a video acquisition instruction input by a user to a position where a certain designated carrier is located;
controlling the tele camera to continuously acquire real-time change video images of the documents on the appointed carrier;
and transmitting the change video image to a designated display screen for live broadcast display in real time.
In a second aspect, the present invention also provides an indoor visual document collection device, where the device includes:
the wide-angle camera control module is used for controlling the wide-angle camera to shoot so as to acquire a scene image, and the scene image comprises: at least one document outline;
positioning and segmenting each document contour image from the scene image;
the acquisition command queue acquisition module is used for acquiring an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command, and each acquisition command comprises position information of a designated document contour;
the document image acquisition module is used for controlling the tele camera to acquire a specified document corresponding to the position information according to the acquisition command queue, and performing image processing to obtain a document image;
and the first display module is used for sending the document image to a first display area for display.
In a third aspect, the present invention also provides an indoor visual document collection system, including: at least one processor, at least one memory, and computer program instructions stored in the memory, which when executed by the processor, implement the method of the first aspect.
In a fourth aspect, the invention also provides a medium having stored thereon computer program instructions which, when executed by a processor, implement the method of the first aspect.
The beneficial effects are as follows: in the intelligent interactive acquisition method, device, system and medium for indoor visual documents, the wide-angle camera captures a scene image and the document contour images are located and segmented from that scene image. The tele camera is then controlled, according to the acquisition command queue, to capture images at the positions corresponding to the designated documents; the captured images are processed to obtain document images, which are finally sent to the first display area for display. The process requires no electronic equipment for the students and no special pens or paper, and allows homework and other works to be submitted and displayed digitally through simple, natural interaction. With the invention, students and teachers can concentrate on the content of classroom teaching instead of being distracted by cumbersome document digitisation operations, which further improves teaching quality and classroom efficiency. In addition, a teacher can use the system without training, which helps to improve classroom efficiency, lets the teacher follow the learning situation in real time, and allows teaching plans to be adjusted dynamically.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Other drawings may be obtained from these drawings by a person skilled in the art without inventive effort, and such drawings also fall within the scope of the present invention.
FIG. 1 is a flow chart of an intelligent interactive acquisition method for indoor visual documents of the invention;
FIG. 2 is a flow chart of the intelligent interactive acquisition method of indoor visual documents, which is applied to classroom teaching;
FIG. 3 is a schematic diagram of a display interface for presenting document images and desk images in accordance with the present invention;
FIG. 4 is a flow chart of a method of triggering document collection using gesture recognition in accordance with the present invention;
FIG. 5 is a diagram showing the effect of desk images in different states displayed on a display interface when the invention uses gestures to answer a question;
FIG. 6 is an effect diagram of the present invention showing the document collection process in a fly-animation manner;
FIG. 7 is a diagram showing the effect of a display interface when a plurality of documents are collected according to a collection command queue;
FIG. 8 is a block diagram of a visual document intelligent interactive acquisition device of the present invention;
FIG. 9 is a schematic diagram of a visual document intelligent interactive acquisition system of the present application;
FIG. 10 is a block diagram of the smart camera subsystem of the present application;
FIG. 11 is a block diagram of the intelligent camera embedded software system of the present application;
fig. 12 is a schematic diagram of a wireless button remote control employed in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. In the description of the present application, it should be understood that the terms "center," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present application and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be configured and operated in a specific orientation, and thus should not be construed as limiting the present application. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element. If not conflicting, the embodiments of the present application and the features of the embodiments may be combined with each other, which are all within the protection scope of the present application.
Example 1
As shown in fig. 1, the present embodiment provides an indoor visual document intelligent interactive acquisition method, which includes:
s1: controlling a wide-angle camera of an acquisition device to capture a scene image, the scene image comprising: at least one document outline;
for example, in classroom teaching, the wide-angle camera of the acquisition device is controlled to shoot the classroom scene to obtain a scene image of the teaching site. Because the wide-angle camera usually captures the whole classroom, the acquired scene image contains, among other things, the outlines of the documents written by all students on their desks. In order to capture the document outlines of all students comprehensively, the wide-angle camera can be controlled to continuously shoot, in real time, top-view scene images of the classroom.
Similarly, in a conference, the wide-angle camera of the acquisition device is controlled to shoot the whole conference site to obtain a scene image of it. The acquired scene image contains, among other things, the documents written by the participants on their desks. In order to capture the document outline of every participant comprehensively, the wide-angle camera can be controlled to continuously shoot, in real time, top-view scene images of the meeting place.
S2: locating and segmenting each of the document contour images 505 from the scene image;
This step processes the scene image acquired in the previous step: an AI algorithm locates each document in the scene image to obtain its position in the photographed scene, and each document contour image 505 is segmented from the scene image. Because the wide-angle camera has a large field of view, the captured scene image covers a wide area and its local regions are not sharp images; this step therefore only cuts out the document contour images 505, so that each document can be located accurately and quickly, the amount of data to be processed is reduced, and the speed and efficiency of document interaction are improved.
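The embodiment leaves the document locator unspecified beyond calling it an AI algorithm. As a hedged illustration of what step S2 could look like, the sketch below uses a classical brightness-based OpenCV contour pass as a stand-in locator; the Otsu threshold, the area cut-off and the assumption of bright paper on a darker desk are illustrative choices, not the claimed algorithm.

```python
import cv2
import numpy as np


def segment_document_contours(scene_bgr: np.ndarray, min_area: int = 5000):
    """Stand-in for the AI locator of step S2: return (box, crop) pairs, where
    box is (x, y, w, h) in scene-image pixels and crop is the contour image.

    Assumes sheets of paper appear as large bright regions against the desk."""
    gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:                       # ignore small bright specks
            results.append(((x, y, w, h), scene_bgr[y:y + h, x:x + w].copy()))
    return results
```

Only the boxes and crops are kept, mirroring the point made above that the full wide-angle frame never needs to be processed at high resolution.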
In the acquired scene images, the individual document images may be located on the same carrier. For example, in a daily meeting, each participant writes on the same conference table, and the outline of each document can be directly segmented from the scene image.
In the acquired scene images, it is also possible that the individual document images are located on different carriers. For example, in a classroom teaching scenario, documents written by individual students are located on the students' respective desks 104. As shown in fig. 2, in addition to the direct segmentation of the contours of individual documents from the scene image, the following approach can be used for this case:
S6: and positioning and dividing a plurality of carrier images 504 corresponding to the carriers respectively carrying the documents from the scene images, and adjusting the relative position relationship of each carrier image 504 when displayed in the second display area to be consistent with the relative position relationship of each carrier in the room.
For example, in a teaching scenario, one or more desks are located and segmented from the scene image using the AI algorithm, a unique ID is generated and bound for each desk 104 (the ID is identical for the same desk across different image frames), and then all desk images, the position information of the desk images within the scene image, and the unique desk IDs are sent to a device that can control the display of the images.
As shown in fig. 3, the second display area displays an image of each document carrier according to the relative positional relationship of each document carrier in the scene image. And enabling the position relation of each document carrier in the displayed image to be consistent with the actual relative position relation of each document carrier in the classroom. Therefore, the user can quickly and accurately find the position of the document carrier in the second display area according to the position of the user in the actual scene, so that the interactive information related to the user can be conveniently acquired in classroom teaching.
For example, desk images are displayed in real time in a desk-view area (the second display area). The relative positions of the several intelligent camera subsystems in the classroom space and the relative positions of the several desks seen by a single camera subsystem are combined, so that the relative display positions of all desk images for the whole classroom (for example row-and-column layouts or grouped, cluster-style desk layouts) are consistent with the actual spatial placement of the classroom desks. Students can therefore intuitively find, from their own seats, the real-time image of their desk displayed on the large screen, which makes the document collection operation convenient. To help students find the position of their own desk in the second display area quickly and accurately, the student's name or number is displayed immediately below each desk image.
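To illustrate how the desk-view layout can be kept consistent with the physical classroom, the sketch below simply scales each desk's centre from scene-image coordinates into the second display area; the function and parameter names are assumptions made for illustration, and the multi-camera stitching mentioned above is ignored.

```python
def layout_desk_views(desk_boxes, scene_size, panel_size):
    """Map each desk's centre in the scene image to a display position so that
    on-screen relative positions mirror the desks' placement in the room.

    desk_boxes : dict of desk_id -> (x, y, w, h) in scene-image pixels
    scene_size : (scene_width, scene_height) of the wide-angle image
    panel_size : (panel_width, panel_height) of the desk-view display area"""
    scene_w, scene_h = scene_size
    panel_w, panel_h = panel_size
    positions = {}
    for desk_id, (x, y, w, h) in desk_boxes.items():
        cx, cy = x + w / 2, y + h / 2                       # desk centre in the scene image
        positions[desk_id] = (int(cx / scene_w * panel_w),  # proportional placement keeps the
                              int(cy / scene_h * panel_h))  # relative layout of the room
    return positions
```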
Wherein the student name may be obtained in several ways:
1) From a seating chart; this is mainly suitable for classes with fixed seats. The seating chart file records the spatial relationship between student names and seats, and the classroom visual interactive acquisition and play software binds names to desk views based on that relationship. The seating chart can be filled in manually, or each student can place a sheet of paper bearing their name/number ID on their own desk; the software provides a name/number ID recognition button and menu, the acquisition, recognition and storage of the name/number IDs are completed under the unified direction of the teacher, and the operation is repeated once after every change of seating.
2) From a name card. The student places a handwritten name card (containing, for example, a name and a student ID) in a preset area (for example the upper right corner of the desk); when the classroom visual interactive acquisition and play software needs to know the name for a desk, it triggers the intelligent camera subsystem to aim at the name card, shoot it and recognize it to obtain the name or student ID.
3) From a name/number ID field on the document itself. OCR is performed on the document and the name/number ID field in the recognition result is matched to obtain the name or number ID.
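For way 3) above, a hedged sketch of matching an OCR result against a class roster is given below. The purely numeric student-number pattern and the roster format are assumptions, and the OCR engine itself is left out.

```python
import re


def match_student_id(ocr_text: str, roster: dict):
    """Match the name/number ID field recognised from a document against a
    roster mapping student-number strings to names; returns (number, name)
    or None.  Assumes student numbers are runs of four or more digits."""
    for token in re.findall(r"\d{4,}", ocr_text):        # candidate student numbers
        if token in roster:
            return token, roster[token]
    for number, name in roster.items():                   # fall back to a name match
        if name and name in ocr_text:
            return number, name
    return None
```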
S3: acquiring an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command, and each acquisition command comprises position information of a designated document contour;
This step executes the collection of a designated document according to a received acquisition command. To ensure that the collected document is the one designated by the user, this embodiment adds the position information of the designated document contour to the acquisition command; during subsequent collection, that position information is used to find accurately and quickly the designated document that the acquisition command refers to. The acquisition commands can be arranged in the order in which they were triggered to form an acquisition command queue.
Wherein the acquisition command may be triggered in a number of ways. In this regard, in the present embodiment, the method further includes, before step S1:
s01: the document contour acquisition triggering step comprises one of the following triggering modes:
s011: controlling the wide-angle camera to acquire whether specified gestures exist on the carrier and in a preset range around the carrier in the scene image in real time;
for example, whether the palm is placed at certain designated positions (the left upper corner, the right upper corner and the like of a desk) of a certain desk can be detected, and when the palm is detected to be placed at certain designated positions of the desk, the tele camera is controlled to acquire the document image on the desk.
As shown in fig. 4, controlling the wide-angle camera to detect in real time whether the specified gesture is present on the carrier or within the preset range around the carrier in the scene image includes:
s0111: controlling to superimpose and display a virtual trigger button to a designated area on the carrier image 504, wherein the virtual trigger button is in a first display state;
as shown in fig. 5, the circle in the upper left corner of the leftmost desk image in the figure represents a virtual trigger button.
S0112: when the wide-angle camera recognizes that a specified gesture is on the carrier, the hands of the user on the carrier are collected, and in a carrier image 504 comprising an image of the hands, a prompt pattern is displayed around the image of the hands in a manner to prompt the user;
As shown at the desk in the middle position of fig. 5, when a student needs to upload a written document, the student only has to place a hand at the virtual trigger button position shown in the figure; when the wide-angle camera recognizes that the student has placed a hand at the virtual trigger button position of the desk, the desk image displays, around the hand, an effect pattern 903 in which the virtual trigger button lights up, prompting the user that collection of the document image on that desk has been successfully triggered.
S0113: after the appointed gesture is identified, the virtual trigger button is in a second display state, wherein the first display state is different from the second display state.
As shown at the rightmost desk in fig. 5, once a student has successfully triggered acquisition of the document image, the student does not need to hold the preset gesture and may move the hand away from the virtual trigger button position; after the user's hand has moved away, the circle in the upper corner of the desk image remains in the lit state.
This triggering method can be applied to a classroom quick-answer (preemptive answering) session, enlivening the classroom atmosphere and encouraging students to take part. The system collects and displays student documents in the order in which the students submit their answers. The classroom visual interactive acquisition and play software provides a 'preemptive answer' button and menu; when the teacher clicks 'preemptive answer', the software sends a 'preemptive answer' instruction to all intelligent camera subsystems, which start the real-time desk-image gesture recognition algorithm module and, as soon as the preset gesture is detected in any desk image, generate a queued document acquisition command for that desk. As shown in fig. 5, the preferred gesture detection area is the upper left corner of the desk and the preferred gesture is pressing the virtual answer button with the five fingers held together and straightened; the virtual answer button is preferably displayed superimposed on the upper left corner of the desk image, initially in the non-pressed state 901. After the gesture is detected, the virtual button is updated to the pressed state 904 (for example, a lighting effect is shown and the answer ranking 905 is displayed), and the pressed state is kept until the teacher switches to the next question; the student does not need to hold the gesture the whole time.
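A hedged sketch of how the answer ranking 905 could be assigned in gesture-detection order is shown below; the class name, fields and the one-press-per-desk rule are illustrative assumptions.

```python
import itertools
import time


class AnswerSession:
    """Tracks preemptive-answer order for one question: the first desk whose
    gesture is detected gets rank 1, the next rank 2, and so on."""

    def __init__(self):
        self._counter = itertools.count(1)
        self.ranks = {}                      # desk_id -> (rank, detection time)

    def register_gesture(self, desk_id: str) -> int:
        if desk_id not in self.ranks:        # only the first press per desk counts
            self.ranks[desk_id] = (next(self._counter), time.time())
        return self.ranks[desk_id][0]        # rank shown next to the lit button
```

Switching to the next question would simply start a new AnswerSession.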
As a preferred arrangement, the virtual answer button can be displayed at the upper left corner of the desk (or in the middle): the student writes the answer with the right hand and, once finished, the left hand 902 stretches out naturally to press the virtual button at the upper left corner; after this operation the student can see the answer ranking at a glance by looking up at the large screen. This arrangement follows the principle of humane, natural interaction, the whole process is natural and fluent, and the user experience is noticeably improved. For students who write with the left hand, the virtual answer button can of course be displayed at the upper right corner of the desk instead.
In addition, because the intelligent camera subsystem has an operational delay between detecting the answer gesture and finishing the document shot, students might exploit this loophole to gain time: pressing the virtual button slightly early, before the answer is actually finished, and continuing to write while the tele camera is being aimed, so that by the time the tele camera is aimed at the desk the hand and the pen have only just left the paper.
To counter this, the method further includes the following steps after the answer gesture triggers the document image acquisition:
detecting whether an image of a hand holding a pen exists in a document outline area on a carrier corresponding to the answering gesture;
if yes, canceling the collection work of the document image on the carrier;
If not, collecting the appointed document according to the collection command queue.
The corresponding measure that prevents students from exploiting this loophole is that, for a desk triggered in this way, before document collection is completed the intelligent camera subsystem checks whether a hand holding a pen appears in the document contour image area; if such a hand is detected, the pending and ongoing collection operations for that desk are cancelled.
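A hedged sketch of this guard is given below; `detect_pen_in_hand` stands for whatever detector (for example a small image classifier) the subsystem might use, which the patent does not specify.

```python
def should_cancel_capture(document_roi, detect_pen_in_hand) -> bool:
    """Return True when the loophole guard described above should cancel the
    pending/ongoing capture: a hand holding a pen is still visible inside the
    document contour area of the desk image."""
    return bool(detect_pen_in_hand(document_roi))
```

The acquisition worker would call this on the latest shot of the document area just before accepting the document image for display.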
S012: and receiving an input instruction of a user for designating a carrier in the scene image.
When the triggering mode is adopted, a teacher clicks a desk image of a target student through a touch screen or a mouse cursor, the classroom visual interactive acquisition and playing software sends a document acquisition instruction with a desk unique ID to the target intelligent camera subsystem (the classroom visual interactive acquisition and playing software stores the corresponding relation between the desk unique ID and the intelligent camera unique ID), and the intelligent camera subsystem completes document acquisition and sends documents. The foregoing manner enables the teacher to collect the document image of the target student in a roll-call manner.
S013: and receiving an operation instruction of a user on the virtual function control key of the third display area 503 on the display interface.
Virtual function control keys may be displayed in a certain area of the display interface, for example, virtual function control keys such as "random", "total-collection" are displayed in the lower right corner area of the display interface in fig. 3.
The teacher clicks the 'one-key total collection', and the classroom visual interactive acquisition and playing software sends a 'one-key total collection' instruction to all intelligent camera subsystems, and the intelligent camera subsystems complete document acquisition and sending. Preferably, the intelligent camera subsystem and/or the classroom podium computer perform OCR recognition on the document, compare and score the recognition result with a standard answer preset by a teacher, and display the statistical result through classroom visual interactive acquisition and playing software.
To guarantee every student an equal chance to present in class, a fair and random way of presenting students' homework or works is needed. The classroom visual interactive acquisition and play software therefore provides a 'random acquisition' button and menu: the teacher clicks 'random acquisition', the software randomly selects one or more desks and sends a document acquisition instruction carrying each desk's unique ID to the corresponding target intelligent camera subsystem, and the intelligent camera subsystems complete document acquisition and sending. In this way every student gets a fair opportunity to participate and present in class, which stimulates students' enthusiasm for taking part.
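As a hedged illustration, the fair random selection and the per-desk dispatch described above might look roughly like this; the message format, the desk-to-camera mapping and the `send` callable are assumptions.

```python
import random


def pick_random_desks(desk_ids, count: int = 1):
    """Fair random choice for 'random acquisition': every desk has an equal
    chance of being selected, with no repeats within a single draw."""
    desk_ids = list(desk_ids)
    return random.sample(desk_ids, k=min(count, len(desk_ids)))


def dispatch_collect(selected_desks, desk_to_camera, send):
    """Send a document acquisition instruction carrying each desk's unique ID
    to the intelligent camera subsystem responsible for that desk."""
    for desk_id in selected_desks:
        send(desk_to_camera[desk_id], {"cmd": "collect", "desk_id": desk_id})
```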
S4: controlling a tele camera of the acquisition equipment to acquire a specified document corresponding to the position information according to the acquisition command queue, and performing image processing to obtain a document image;
Once the position of the designated document is determined, the tele camera can be controlled to capture the document image at the corresponding position. This embodiment can collect a single page from one user, multiple pages from one user, or one or more pages from several users. Because turning the tele camera from aiming at one desk to aiming at another takes a certain delay t, acquisition commands are queued whenever the interval between successively generated commands is smaller than t: the camera control software maintains an acquisition command queue, newly generated commands are appended to the tail of the queue, and after finishing one command the camera control software takes the next command from the head of the queue and executes it, until the queue is empty.
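A hedged sketch of such a queue and of the worker that serves it head-first is given below; the command fields, the `aim_at` camera interface and the `execute_command` callable are assumptions rather than the claimed control software.

```python
import time
import threading
from collections import deque
from dataclasses import dataclass, field


@dataclass
class AcquisitionCommand:
    desk_id: str                          # unique ID of the carrier (desk) to shoot
    contour_box: tuple                    # (x, y, w, h) of the designated document contour
    triggered_at: float = field(default_factory=time.time)


def start_capture_worker(command_queue: deque, tele_camera, execute_command,
                         idle_sleep: float = 0.05) -> threading.Thread:
    """Serve the acquisition command queue head-first (FIFO).

    Re-aiming the tele camera between desks costs the delay t described above,
    so commands generated in the meantime simply wait in the queue."""
    def worker():
        while True:
            if command_queue:
                cmd = command_queue.popleft()        # take the command at the head
                tele_camera.aim_at(cmd.contour_box)  # incurs the re-aiming delay t
                execute_command(cmd)                 # shoot, process and send the document image
            else:
                time.sleep(idle_sleep)               # queue empty: wait for new commands

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread
```

New commands are simply appended to the deque (the tail), which matches the tail-append, head-consume behaviour described above.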
S5: and sending the document image to a first display area for display.
The first display area is a document image display area, and the display area may be located on a display screen of the display device or may be located in a projection display area of the projection device.
As shown in fig. 3, as a preferred embodiment, the first display area 501 and the second display area 502 are located on the same display interface in this embodiment, and the step S4: controlling a tele camera to collect a specified document corresponding to the position information according to the collection command queue, and performing image processing to obtain a document image comprises the following steps:
When a document image of a specified document corresponding to the position information on a specified carrier of the second display area 502 is acquired, the document image of the specified document is displayed on the display interface so as to enter the first display area 501 in an animated manner along a specified path.
As shown in fig. 6, when, for example, the document image on the desk labeled 'Name 3' is acquired, the document image flies on the display interface from that desk into the first display area along the path indicated by the broken-line arrow in the figure, and a clear document image is then displayed in the first display area.
Because the first display area 501 and the second display area 502 are located on the same display interface, the carrier image 505 and the document image are also located on the same display interface, so that a user can clearly see the specific content of the collected document through the same display interface, and can conveniently see the interactive information such as the document collection progress, the document collection sequence and the like displayed on the corresponding carrier image 505.
During collection, the collection of the document image is shown vividly as an animation, so that students can see intuitively how the document image is collected and can quickly grasp the correspondence between the desk position and the display position of the document image.
In this embodiment, when document images on a plurality of desks need to be acquired, the step S4: controlling the tele camera to collect the appointed document corresponding to the position information according to the collection command queue, and performing image processing to obtain a document image further comprises the following steps:
and when the collection command queue comprises more than two collection commands, the collection progress of the designated documents corresponding to the collection commands on the carriers and the queuing sequence numbers of the designated documents on the carriers are displayed on the second display area.
As shown in fig. 7, to let the teacher and students see the progress of ongoing acquisitions, each smart camera can be polled periodically for the command currently being executed and for the data in its acquisition command queue, and the status of the commands can be shown on the corresponding desk images, including "being acquired", "acquired" and the queuing number, represented respectively by the in-acquisition highlight pattern 702, the acquired flag 701 and the queuing-number flag 703.
In this embodiment, the collection and display of the document image may also be controlled using a wireless button remote controller.
The wireless button remote controller can be paired with the podium computer, whose background, interface-free, start-at-boot service receives the key messages and then sends document acquisition messages to the intelligent camera subsystem.
As a preferred arrangement, the tele camera is controlled by the remote controller to collect the document outline, and the control instruction sent by the remote controller controls the display interface of the display screen through the acquisition device. In this implementation the remote controller is paired with any one of the intelligent camera subsystems. The podium computer runs a background, interface-free service that starts automatically at boot; this service establishes a network connection with the document-image sending protocol software of the intelligent camera subsystem and stays in the document-image receiving state, and as soon as a document image is received it automatically pops up and displays it. A teacher can therefore start document acquisition and display in any state of the powered-on podium computer (for example while playing a PPT), without starting any software or switching interfaces: pressing a button on the wireless remote controller is all that is needed to start collecting and displaying documents.
Because a Bluetooth keyboard connection is not a standard function of the podium computer, pairing the wireless button remote controller with the podium computer would require installing an extra wireless receiver module on it; and because the podium computer is an ordinary, shared computer, several pieces of software may capture keyboard input at the same time and the resulting conflicts can make the function unusable. Pairing the wireless button remote controller with the intelligent camera subsystem instead means that no extra wireless receiver module has to be installed on the podium computer and conflicts with other software are avoided. When the system is installed, the wireless button remote controller can be paired with one or more intelligent document camera subsystems in the same room; in use, the remote controller connects after power-on to any one of the paired subsystems without interfering with normal use, because all intelligent document camera subsystems and the podium computer are on the same local area network: whichever intelligent document camera subsystem receives the wireless button message can broadcast the button message to the other nodes on the local area network and trigger them to perform the corresponding actions.
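As a hedged illustration of relaying a received button press to every other node on the classroom LAN, a fire-and-forget UDP broadcast could look roughly like this; the port number and message format are illustrative assumptions, not part of the patent.

```python
import json
import socket

BROADCAST_PORT = 47800          # assumed port, not specified by the patent


def broadcast_button_event(key_name: str) -> None:
    """Re-broadcast a remote-control key press so that every intelligent
    document camera subsystem and the podium computer on the local area
    network can react, regardless of which subsystem the remote reached."""
    payload = json.dumps({"type": "remote_key", "key": key_name}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", BROADCAST_PORT))
```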
In some cases, each student needs to submit multiple pages of content, such as the front and back of an examination paper, and multiple pages of a exercise book, each time a process is collected. The document interaction method of the embodiment also provides a method for realizing collection of the multi-page document. In this embodiment, the document may include a plurality of pages of content, and when the wide-angle camera recognizes a gesture instruction for capturing each page of content, the current page of content of the captured document is displayed on the display interface and the number of captured pages of the document is displayed in a designated display area of the display interface.
For example, during document collection the user first turns to a page whose content has not yet been collected and triggers collection with the gesture; the desk-view state shown on the large screen may read "queued (with a sequence number)", "collecting" or "collected". When the state changes to "collected", the user turns to the next page and triggers collection with the gesture again, and in this way all pages of the document are collected in sequence.
Example 2
In classroom teaching or at large meetings, when a student or participant in the front rows answers a question or speaks, those in the back rows may not hear them, and when a student or participant in the back rows answers or speaks, the students, teachers or participants in the front rows may not hear them. In addition, for language lessons a teacher needs to record the student's spoken answer so that it can be played back repeatedly in order to correct the student's pronunciation.
This embodiment makes a further improvement on the basis of the previous embodiment.
In this embodiment, the collecting device is integrated with a microphone array, and the method includes:
s71: the location of the user making the speech is located within the room.
S72: controlling the acquisition equipment closest to the position of the user to acquire the voice of the user;
the pick-up direction of the microphone is automatically set according to the desk direction of the question respondent, and the choice of the question respondent can be specified by clicking a desk by a teacher or can be randomly selected by a system. The image recognition algorithm can also automatically determine the adaptation orientation by detecting the standing posture of the student from the image shot by the wide-angle camera.
S73: and controlling other appointed acquisition devices or electronic devices with the audio playing function indoors to carry out sound amplification.
In this embodiment the audio playback function integrated into the smart camera subsystem (with the sound field directed vertically towards the classroom floor) may be used, or other electronic devices with an audio playback function may be used for amplification. The teacher chooses, through the classroom visual interactive acquisition and play software, whether to enable the audio function and whether to enable the recording function or the amplification function. At any one time only one intelligent camera subsystem enables its pickup function, while the other nodes on the local area network are in the broadcast-receiving state and play the voice data synchronously in real time. For the case where the podium computer is connected to a public-address system, the teacher specifies the audio playback enable state for each type of node in the system network, so that the sound can be amplified by the podium computer, by the intelligent camera subsystems, or by all nodes. Preferably, the smart camera subsystem that is recording does not play back its own recording in real time, so as to avoid feedback howling. In addition, a speech recognition algorithm can be used to convert the recordings into text for storage or presentation.
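A hedged sketch of routing steps S71-S73 is shown below: the node nearest to the located speaker records, and the others amplify. Treating device positions as plain 2-D coordinates and excluding the recording node from playback are assumptions consistent with the anti-howling note above.

```python
import math


def route_audio(speaker_xy, device_positions):
    """Pick the acquisition device nearest to the located speaker for pickup
    (S72) and let every other node amplify (S73); the recording node is left
    out of playback to avoid feedback howling.

    speaker_xy       : (x, y) position of the speaker in room coordinates
    device_positions : dict of device_id -> (x, y) position in the room"""
    nearest = min(device_positions,
                  key=lambda dev: math.dist(speaker_xy, device_positions[dev]))
    amplifiers = [dev for dev in device_positions if dev != nearest]
    return nearest, amplifiers
```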
In traditional classroom teaching, teachers often have students answer questions on the blackboard as the lesson requires, but walking back and forth between seat and podium easily wastes valuable teaching time. This embodiment makes a further improvement on the basis of the foregoing embodiments. The acquisition method comprises the following steps:
s81: receiving a video acquisition instruction input by a user to a position where a certain designated carrier is located;
in the implementation, a teacher or a system can randomly select a live desk, and each intelligent camera subsystem can only designate one live desk at most.
S82: controlling the tele camera to continuously acquire real-time change video images of the documents on the appointed carrier;
and after the live desks are determined, controlling the tele camera to aim and lock the designated student desks, and entering a continuous image acquisition state.
S83: and transmitting the change video image to a designated display screen for live broadcast display in real time.
The intelligent camera subsystem compresses the image into a real-time video stream and pushes the real-time video stream to the classroom visual interactive acquisition and play software, the classroom visual interactive acquisition and play software plays the real-time video stream, and other students can watch the answering or demonstration process through a large screen.
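A hedged sketch of the continuous-capture loop behind steps S82-S83 is given below; `push_frame` stands in for whatever compression and real-time streaming mechanism the subsystem actually uses, and the camera interface is an assumption.

```python
import time


def stream_desk_live(tele_camera, contour_box, push_frame,
                     fps: float = 15.0, stop_flag=lambda: False):
    """Aim the tele camera at the designated live desk, then keep capturing
    frames and handing them to push_frame (assumed to compress and push them
    as a real-time video stream) until stop_flag() returns True."""
    tele_camera.aim_at(contour_box)              # lock onto the designated desk
    frame_interval = 1.0 / fps
    while not stop_flag():
        frame = tele_camera.capture()            # real-time image of the document
        push_frame(frame)
        time.sleep(frame_interval)
```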
In this embodiment the audio playback and video playback described above can run synchronously; for example, when only one student is demonstrating, the live video function and the live audio function can run at the same time. The system can also store the live audio/video on the medium for later use as teaching or reference material.
Example 3
Referring to fig. 8, the embodiment provides an indoor visual document collection device, which includes:
the wide-angle camera control module is used for controlling the wide-angle camera to shoot so as to acquire a scene image, and the scene image comprises: at least one document outline;
locating and segmenting each of the document contour images 505 from the scene image;
the acquisition command queue acquisition module is used for acquiring an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command, and each acquisition command comprises position information of a designated document contour;
the document image acquisition module is used for controlling the tele camera to acquire a specified document corresponding to the position information according to the acquisition command queue, and performing image processing to obtain a document image;
and the first display module is used for sending the document image to a first display area for display.
Example 6
In addition, the method for collecting the indoor visual document according to the embodiment of the invention can be implemented by an indoor visual document collecting system, comprising the following steps: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method as described in any of the preceding embodiments.
As shown in fig. 9, the visual document collection system of this embodiment includes one or more smart camera subsystems 100, where each smart camera subsystem 100 includes at least one wide-angle camera and one tele camera whose aiming direction can be adjusted. As a preferred arrangement, the optical axis of the wide-angle camera lens in this example is perpendicular to the classroom floor or conference-room floor. With this arrangement the wide-angle camera can capture the whole scene of the classroom or meeting place, and the desks and documents in the images are closer to their actual appearance, so that the desk and document images can later be segmented and located quickly.
As shown in fig. 10, the intelligent camera subsystem 100 further includes an industrial personal computer motherboard with a network interface, the wide-angle camera and the telephoto camera are connected to the industrial personal computer motherboard, and the industrial personal computer motherboard runs an embedded software system 300, as shown in fig. 11, where the embedded software system 300 includes a general control and interface module, an image processing and AI algorithm module, and a camera control module.
As shown in fig. 9, the aforementioned smart camera subsystem 100 may be suspended from the ceiling of a classroom. The visual document collection system of this embodiment further includes a podium computer 103, a shared large screen (projection screen or LCD display), and a wired network switch and/or a wireless AP 101. The camera subsystems and the podium computer form a local area network over fully wireless, fully wired, or mixed wired and wireless connections, and can exchange data and messages with one another over a network protocol (such as TCP/UDP). The podium computer 103 may be an independently deployed PC or a virtual host running in a private/public cloud, with its display output shown on the shared large screen. The podium computer and the shared large screen may also be replaced by an integrated interactive electronic whiteboard 102.
In this embodiment, an application program for executing the foregoing method, such as classroom visual interactive acquisition and playback software, may be run on a podium computer. The program is connected with a general control and interface module of all intelligent camera subsystems in the local area network. The program can send a document collection and status query instruction when running, and receive and display document images. The classroom visual interactive acquisition and play software stores the unique IDs (such as the fixed IP address or host name of the intelligent camera) of all the intelligent cameras in the same classroom and the same local area network, and the spatial relative position relationship of the intelligent cameras in the classroom.
The industrial personal computer motherboard of the intelligent camera subsystem runs the image processing and AI algorithm software and the camera control software. The wide-angle camera is controlled to continuously shoot, in real time, top-view scene images of the classroom; the AI algorithm locates and segments one or more desks from the images, generates and binds a unique ID for each desk (identical for the same desk across different image frames), and then sends all desk images, the position information of the desk images within the scene image, and the unique desk IDs to the classroom visual interactive acquisition and play software.
The industrial personal computer main board runs image processing and AI algorithm software and camera control software, controls the tele camera, aims at a specified desk to shoot a high-definition image of the desk, performs perspective correction and document segmentation on the desk image to obtain a document image, and sends the document image to classroom visual interactive acquisition and playing software through the general control and interface module.
In this embodiment the indoor visual document collection system further includes a wireless button remote controller, which is paired with any one of the smart camera subsystems. The wireless button remote controller integrates system status indicator lamps that show the state of the system, preventing the situation where a teacher presses a button to send a document acquisition instruction while the system is unavailable and then gets none of the expected feedback. Preferably, an indicator lamp of one colour indicates the connection state between the wireless button remote controller and the smart camera subsystem, while indicator lamps 801 of additional colours indicate the state of the smart camera subsystem and the state of the background, interface-free, start-at-boot service; because the podium computer is a shared computer, that background service may become unavailable as a side effect of third-party software. The layout of the operating keys of the wireless button remote controller is shown in fig. 12: the remote controller carries an exit key 802, a random key 803, a total key 805, an answer key 804, a multi-page key 806 and a main-screen key 807.
The function implemented by the exit key 802 is to end or cancel the ongoing acquisition operation; the function implemented by the random key 803 is to randomly select one desk to acquire; the function implemented by the total key 805 is to acquire all desks; the function implemented by the answer key 804 is to enter the preemptive answer submission state; the function implemented by the multi-page key 806 is to enter the multi-page submission state; and the function implemented by the main screen key 807 is to call up the classroom visual interactive acquisition and playback software.
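A hypothetical mapping from these keys to the commands sent to the camera subsystem is sketched below; the key names and command strings are assumptions, since the embodiment only defines each key's function.

```python
# Illustrative mapping from remote-controller keys to acquisition commands.
REMOTE_KEY_ACTIONS = {
    "exit":        "cancel_current_acquisition",    # key 802
    "random":      "acquire_random_desk",           # key 803
    "answer":      "enter_preemptive_answer_mode",  # key 804
    "total":       "acquire_all_desks",             # key 805
    "multi_page":  "enter_multi_page_mode",         # key 806
    "main_screen": "show_acquisition_software",     # key 807
}

def handle_key(key_name: str) -> str:
    """Translate a pressed key into the command sent to the camera subsystem."""
    try:
        return REMOTE_KEY_ACTIONS[key_name]
    except KeyError:
        raise ValueError(f"unknown remote key: {key_name!r}")
```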
The indoor visual document collection system of this embodiment further includes a microphone array, which may be integrated into the smart camera subsystem. In addition, the indoor visual document acquisition system also comprises a sound amplification device.
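As a rough sketch of the nearest-device selection described later in claim 8, the following assumes already-known device positions and an already-localized speaker position; the coordinates, identifiers, and function name are illustrative assumptions.

```python
# Sketch of the microphone-array behaviour of claim 8: the device nearest to
# the located speaker acquires the voice, and the other devices amplify it.
import math

def nearest_device(speaker_xy, devices):
    """devices: dict of device_id -> (x, y) position in room coordinates."""
    return min(devices, key=lambda d: math.dist(speaker_xy, devices[d]))

devices = {"cam-A": (1.0, 1.0), "cam-B": (4.0, 5.0), "cam-C": (8.0, 2.0)}
picker = nearest_device((3.0, 4.5), devices)      # "cam-B" acquires the voice
amplifiers = [d for d in devices if d != picker]  # the rest amplify the audio
```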
The acquisition system is simple to operate and a teacher can use it without training; it improves classroom efficiency, helps the teacher grasp the learning situation in real time, and allows the teaching plan to be adjusted dynamically. Moreover, through regular use of the acquisition system, personalized classroom learning big data can be accumulated for each student, providing a big-data foundation for AI-based individualized teaching.
Example 7
In addition, in combination with the indoor visual document collection method of the above embodiments, an embodiment of the invention may be implemented as a computer-readable medium. The computer-readable medium stores computer program instructions; when executed by a processor, these instructions implement any of the indoor visual document collection methods of the above embodiments.
The indoor visual document collection method, device, system and medium provided by the embodiments of the invention have been described in detail above.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.
Claims (12)
1. An intelligent interactive acquisition method for indoor visual documents is characterized by comprising the following steps:
S1: controlling a wide-angle camera of an acquisition device to capture a scene image, the scene image comprising: at least one document contour;
S2: positioning and segmenting each document contour image from the scene image;
S3: acquiring an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command, and each acquisition command comprises position information of a designated document contour;
S4: controlling a tele camera of the acquisition device to acquire a specified document corresponding to the position information according to the acquisition command queue, and performing image processing to obtain a document image;
S5: the document image is sent to a first display area for display;
the method further comprises, between steps S2 and S3:
S6: positioning and segmenting, from the scene image, a plurality of carrier images corresponding to carriers that respectively carry the documents, and adjusting the relative positional relationship of the carrier images, as displayed in a second display area, to be consistent with the relative positional relationship of the carriers in the room;
wherein the first display area and the second display area are located on the same display interface, and step S4, controlling the tele camera to acquire the specified document corresponding to the position information according to the acquisition command queue and performing image processing to obtain a document image, comprises the following steps:
when a document image of the specified document corresponding to the position information on a specified carrier of the second display area is acquired, displaying the document image of the specified document on the display interface so that it enters the first display area along a specified path in an animated manner.
2. The method according to claim 1, wherein said step S4, controlling the tele camera to acquire the specified document corresponding to the position information according to the acquisition command queue and performing image processing to obtain a document image, further comprises the following steps:
and when the acquisition command queue comprises more than two acquisition commands, the acquisition progress of the specified documents corresponding to the acquisition commands on the carriers and the queuing sequence numbers of the specified documents on the carriers are displayed on the second display area.
3. The method according to claim 1, characterized in that it further comprises, before step S1:
S01: the document contour acquisition triggering step comprises one of the following triggering modes:
controlling the wide-angle camera to detect in real time whether a specified gesture exists, in the scene image, on the carrier or within a preset range around the carrier;
receiving an input instruction of a user for designating a carrier in the scene image;
and receiving an operation instruction of a user on a virtual function control key of a third display area on the display interface.
4. The method according to claim 3, wherein said controlling the wide-angle camera to detect in real time whether a specified gesture exists, in the scene image, on the carrier or within a preset range around the carrier comprises:
controlling a virtual trigger button to be superimposed on a designated area of the carrier image, wherein the virtual trigger button is in a first display state;
when the wide-angle camera recognizes that a specified gesture exists on the carrier, capturing the user's hand on the carrier and, in a carrier image that includes an image of the hand, displaying a prompting pattern around the image of the hand so as to prompt the user;
after the specified gesture is recognized, the virtual trigger button is in a second display state, wherein the first display state is different from the second display state.
5. The method of claim 4, wherein the prompting pattern comprises the user's preemption sequence number when the scene image is a scene image of an indoor preemptive answering session.
6. The method of claim 4, wherein the triggering modes further comprise: controlling the tele camera by a remote controller to acquire the document contour; wherein a control instruction sent by the remote controller controls, through the acquisition device, the display interface of the display screen.
7. The method of any one of claims 1 to 6, wherein the document includes a plurality of pages of content, and when the wide-angle camera recognizes a gesture instruction to capture each page of content, displaying the current page of content of the captured document on a display interface and displaying the number of pages captured for the document on a designated display area of the display interface.
8. The method of claim 1, wherein the acquisition device is integrated with a microphone array, the method comprising:
locating the position in the room of a user who is speaking;
controlling the acquisition device closest to the position of the user to acquire the user's voice;
and controlling other designated acquisition devices, or indoor electronic devices having an audio playing function, to perform sound amplification.
9. The method according to claim 1 or 8, characterized in that the method comprises:
receiving a video acquisition instruction input by a user for the position where a designated carrier is located;
controlling the tele camera to continuously acquire real-time video images of the changing document on the designated carrier;
and transmitting the changing video images in real time to a designated display screen for live display.
10. An indoor visual document collection device, the device comprising:
the wide-angle camera control module is used for controlling the wide-angle camera to shoot so as to acquire a scene image, the scene image comprising: at least one document contour;
positioning and segmenting each document contour image from the scene image; positioning and segmenting, from the scene image, a plurality of carrier images corresponding to carriers that respectively carry the documents, and adjusting the relative positional relationship of the carrier images, as displayed in a second display area, to be consistent with the relative positional relationship of the carriers in the room;
the acquisition command queue acquisition module is used for acquiring an acquisition command queue, wherein the acquisition command queue comprises at least one acquisition command, and each acquisition command comprises position information of a designated document contour;
the document image acquisition module is used for controlling the tele camera to acquire a specified document corresponding to the position information according to the acquisition command queue, and performing image processing to obtain a document image;
the first display module is used for sending the document image to a first display area for display;
wherein the first display area and the second display area are located on the same display interface, and the document image acquisition module is configured such that controlling the tele camera to acquire the specified document corresponding to the position information according to the acquisition command queue and performing image processing to obtain a document image comprises: when a document image of the specified document corresponding to the position information on a specified carrier of the second display area is acquired, displaying the document image of the specified document on the display interface so that it enters the first display area along a specified path in an animated manner.
11. An indoor visual document collection system, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, which, when executed by the processor, implement the method of any one of claims 1-9.
12. A medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111081077.2A CN113794824B (en) | 2021-09-15 | 2021-09-15 | Indoor visual document intelligent interactive acquisition method, device, system and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113794824A CN113794824A (en) | 2021-12-14 |
CN113794824B true CN113794824B (en) | 2023-10-20 |
Family
ID=79183539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111081077.2A Active CN113794824B (en) | 2021-09-15 | 2021-09-15 | Indoor visual document intelligent interactive acquisition method, device, system and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113794824B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02127762A (en) * | 1988-11-08 | 1990-05-16 | Nippon Telegr & Teleph Corp <Ntt> | Document picture editing processor |
JP2015026189A (en) * | 2013-07-25 | 2015-02-05 | シャープ株式会社 | Image processing system, portable terminal device, display device, and computer program |
CN105930311A (en) * | 2009-02-18 | 2016-09-07 | 谷歌公司 | Method Of Executing Actions Correlated With Reproduction Document, Mobile Device And Readable Medium |
CN207946952U (en) * | 2016-07-31 | 2018-10-09 | 北京华文众合科技有限公司 | A kind of tutoring system and painting and calligraphy tutoring system |
CN208424595U (en) * | 2018-08-08 | 2019-01-22 | 上海启诺信息科技有限公司 | Video recording archive devices and system based on writing record |
CN109274898A (en) * | 2018-08-08 | 2019-01-25 | 深圳市智像科技有限公司 | File and picture intelligent acquisition methods, devices and systems |
CN109873973A (en) * | 2019-04-02 | 2019-06-11 | 京东方科技集团股份有限公司 | Conference terminal and conference system |
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
CN112333415A (en) * | 2020-10-20 | 2021-02-05 | 深圳市前海手绘科技文化有限公司 | Method and device for demonstrating remote video conference |
CN112612361A (en) * | 2020-12-17 | 2021-04-06 | 深圳康佳电子科技有限公司 | Equipment control method, device, system, terminal equipment and storage medium |
CN113159014A (en) * | 2021-04-28 | 2021-07-23 | 深圳市智像科技有限公司 | Objective question reading method, device, equipment and storage medium based on handwritten question numbers |
CN113298022A (en) * | 2021-06-11 | 2021-08-24 | 深圳市智像科技有限公司 | Device and method for collecting indoor documents and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101270780B1 (en) * | 2011-02-14 | 2013-06-07 | 김영대 | Virtual classroom teaching method and device |
US10289997B2 (en) * | 2017-01-26 | 2019-05-14 | Ncr Corporation | Guided document image capture and processing |
US20210168279A1 (en) * | 2017-04-06 | 2021-06-03 | Huawei Technologies Co., Ltd. | Document image correction method and apparatus |
2021-09-15: CN application CN202111081077.2A — patent CN113794824B (en), status active
Also Published As
Publication number | Publication date |
---|---|
CN113794824A (en) | 2021-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107316520B (en) | Video teaching interaction method, device, equipment and storage medium | |
CN109409234B (en) | Method and system for assisting students in problem location learning | |
CN109147444B (en) | Learning condition feedback method and intelligent desk lamp | |
CN105361429A (en) | Intelligent studying platform based on multimodal interaction and interaction method of intelligent studying platform | |
JP2022551660A (en) | SCENE INTERACTION METHOD AND DEVICE, ELECTRONIC DEVICE AND COMPUTER PROGRAM | |
US20130215214A1 (en) | System and method for managing avatarsaddressing a remote participant in a video conference | |
CN112652200A (en) | Man-machine interaction system, man-machine interaction method, server, interaction control device and storage medium | |
CN110136032B (en) | Classroom interaction data processing method based on courseware and computer storage medium | |
CN111738889A (en) | OMO intelligent interactive cloud classroom system supporting multiple terminals | |
CN110085072A (en) | A kind of implementation method and device of the asymmetric display in multimachine position | |
CN211289676U (en) | Desk lamp and system for assisting learning | |
CN113794824B (en) | Indoor visual document intelligent interactive acquisition method, device, system and medium | |
JP2004333525A (en) | Bidirectional communication system, server, electronic lecture method, and program | |
CN109348272A (en) | Processing method, information control center equipment and storage medium based on writing on the blackboard information | |
CN112185195A (en) | Method and device for controlling remote teaching classroom by AI (Artificial Intelligence) | |
CN111050111A (en) | Online interactive learning communication platform and learning device thereof | |
JP6810515B2 (en) | Handwriting information processing device | |
TWI726233B (en) | Smart recordable interactive classroom system and operation method thereof | |
CN115311920B (en) | VR practical training system, method, device, medium and equipment | |
US20230196632A1 (en) | Information processing device and information processing method | |
TW202016904A (en) | Object teaching projection system and method thereof | |
CN110933510B (en) | Information interaction method in control system | |
CN210072615U (en) | Immersive training system and wearable equipment | |
CN112258909B (en) | Teaching content display method and intelligent platform | |
CN115412679B (en) | Interactive teaching quality assessment system with direct recording and broadcasting function and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||