CN113485619B - Information collection table processing method and device, electronic equipment and storage medium


Info

Publication number
CN113485619B
Authority
CN
China
Prior art keywords
target
collection
information collection
collection table
item
Prior art date
Legal status
Active
Application number
CN202110790684.XA
Other languages
Chinese (zh)
Other versions
CN113485619A (en)
Inventor
高原
王德成
段广龙
胡琦
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110790684.XA
Publication of CN113485619A
Application granted
Publication of CN113485619B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an information collection table processing method and device, an electronic device, and a storage medium. The method includes: presenting an initial information collection table and an operation identification area corresponding to the information collection table; capturing an image containing a user operation and presenting the image in the operation identification area; when the user operation presented in the operation identification area is a target operation, creating, in response to the target operation, a target collection item of the type associated with the target operation in the information collection table; and receiving input collection content corresponding to the target collection item and generating a target information collection table based on the collection content. The target information collection table enables at least two users with operation permissions to edit corresponding information based on the collection content. The method and device enrich the ways in which an information collection table can be created and improve creation efficiency.

Description

Information collection table processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a method and apparatus for processing an information collection table, an electronic device, and a storage medium.
Background
In the related art, information collection tables are usually created through traditional interactions such as mouse operations, manual input, and dragging. This single creation mode is cumbersome to operate, makes collection table creation inefficient, and wastes hardware processing resources.
Disclosure of Invention
Embodiments of the application provide an information collection table processing method and device, an electronic device, and a storage medium, which enrich the ways in which an information collection table can be created and improve creation efficiency.
The technical solutions of the embodiments of the application are implemented as follows:
the embodiment of the application provides a processing method of an information collection table, which comprises the following steps:
presenting an initial information collection table and an operation identification area corresponding to the information collection table;
collecting an image containing user operation, and presenting the image through the operation identification area;
when the user operation presented by the operation identification area is a target operation, responding to the target operation, and creating a target collection item of the type associated with the target operation in the information collection table;
receiving input collection content corresponding to the target collection item, and generating a target information collection table based on the collection content;
The target information collection table is used for enabling at least two users with operation authorities to edit corresponding information based on the collected content.
In the above solution, before the presenting of the initial information collection table, the method further includes:
presenting a collection table creation interface, and presenting collection table creation function items in the collection table creation interface;
the initial information collection table is created in response to a trigger operation for the collection table creation function item.
In the above solution, the presenting the operation identification area corresponding to the information collection table includes:
receiving an item creation instruction for the information collection table;
and responding to the project creation instruction, and presenting an operation identification area corresponding to the information collection table.
In the above solution, the receiving an item creation instruction for the information collection table includes:
presenting an item creation function item in the information collection table, and receiving an item creation instruction for the information collection table in response to a trigger operation for the item creation function item;
or receiving a voice item creation instruction which is triggered based on a voice form and aims at the information collection table.
In the above solution, before the presenting of the initial information collection table, the method further includes:
presenting an operation setting area corresponding to the information collection table and at least two types of collection items including the target collection item;
collecting a first image containing a first user operation, and presenting the first image through the operation setting area;
and identifying the first user operation, and after the first user operation is successfully identified, responding to the selection operation for the target collection item, and associating the first user operation with the target collection item as the target operation.
In the above solution, the presenting at least two types of collection items including the target collection item includes:
presenting a drop-down function item corresponding to the collection item type;
in response to a trigger operation for the drop-down function item, at least two types of collection items including the target collection item are presented.
The embodiment of the application also provides a processing device of the information collection table, which comprises the following steps:
the presentation module is used for presenting the initial information collection table and an operation identification area corresponding to the information collection table;
The acquisition module is used for acquiring an image containing user operation and presenting the image through the operation identification area;
a creation module, configured to create, when a user operation presented by the operation identification area is a target operation, a target collection item of a type associated with the target operation in the information collection table in response to the target operation;
the generation module is used for receiving the input collection content corresponding to the target collection item and generating a target information collection table based on the collection content;
the target information collection table is used for enabling at least two users with operation authorities to edit corresponding information based on the collected content.
In the above scheme, the creating module is further configured to receive a voice creating instruction triggered by a voice manner and directed against the information collection table;
and responding to the voice creation instruction, and creating the initial information collection table.
In the above scheme, the creating module is further configured to present a collection table creating interface, and present a collection table creating function item in the collection table creating interface;
the initial information collection table is created in response to a trigger operation for the collection table creation function item.
In the above scheme, the creating module is further configured to present a collection table creating interface, and present a creating template of the blank information collection table in the collection table creating interface;
in response to a template selection operation for the creation template, a blank information collection table is created as the initial information collection table.
In the above scheme, the presenting module is further configured to present, in the information collection table, a default collection item carrying default collection content and an editing function item corresponding to the default collection item;
when a trigger operation for the editing function item is received, an editing interface for editing the default collection item and corresponding default collection content is presented;
and the generation module is also used for generating a target information collection table based on the collection content and combining the default collection items edited based on the editing interface and the corresponding default collection content.
In the above scheme, the presenting module is further configured to receive an item creation instruction for the information collection table;
and responding to the project creation instruction, and presenting an operation identification area corresponding to the information collection table.
In the above scheme, the presenting module is further configured to present an item creation function item in the information collection table, and receive an item creation instruction for the information collection table in response to a trigger operation for the item creation function item;
Or receiving a voice item creation instruction which is triggered based on a voice form and aims at the information collection table.
In the above scheme, the device further includes:
the starting module is used for receiving a starting instruction of an operation creation function aiming at the information collection table;
in response to the open instruction, opening an operation creation function that creates a collection item in the information collection table by executing a target operation;
and the presenting module is further used for presenting an operation identification area corresponding to the information collection table when the operation creation function is started.
In the above scheme, the starting module is further configured to receive a starting instruction of an operation creation function for the information collection table in response to a trigger operation of the operation creation function switch for presentation;
or receiving a voice starting instruction which is triggered based on a voice form and aims at the operation creation function of the information collection table.
In the above solution, the presenting module is further configured to present, in the operation identification area, guiding information corresponding to the target operation;
wherein the guidance information is used for guiding the creation of the target collection item in the information collection table by executing the target operation.
In the above scheme, the acquisition module is further configured to present a first image recognition frame corresponding to the image in the operation recognition area in a process of performing user operation recognition on the image;
when the user operation identification is completed on the image, presenting a second image identification frame corresponding to the image in the operation identification area;
the positions of the first image recognition frame and the second image recognition frame correspond to a target area containing user operation in the image; the first image recognition frame and the second image recognition frame are different in display style.
In the above scheme, the presenting module is further configured to present creation prompt information in a process of creating the target collection item of the type associated with the target operation;
and the creation prompt information is used for prompting that the target collection item of the type associated with the target operation is being created.
In the above solution, the presenting module is further configured to turn on a voice input function for the target collection item, and
presenting a voice input interface corresponding to the target collection item;
the generation module is further used for receiving the collection content corresponding to the target collection item in the voice form, which is input based on the voice input interface.
In the above scheme, the presenting module is further configured to present, in the voice input interface, voice collection prompt information corresponding to the target collection item;
the voice collection prompt information is used for prompting that collection content for the target collection item is being collected;
and to present a process in which the style of the voice collection prompt information changes as the voice-form collection content is input.
In the above scheme, the device further includes:
the association module is used for presenting an operation setting area corresponding to the information collection table and at least two types of collection items including the target collection item;
collecting a first image containing a first user operation, and presenting the first image through the operation setting area;
and identifying the first user operation, and after the first user operation is successfully identified, responding to the selection operation for the target collection item, and associating the first user operation with the target collection item as the target operation.
In the above scheme, the association module is further configured to present a drop-down function item corresponding to the collection item type;
In response to a trigger operation for the drop-down function item, at least two types of collection items including the target collection item are presented.
In the above scheme, the association module is further configured to present, in an operation setting area, first operation prompt information corresponding to the first user operation;
the first operation prompt information is used for prompting that the first user operation is associated with the target collection item through executing the first user operation.
In the above scheme, the association module is further configured to present second operation prompt information when the first user operation is associated with a collection item of a corresponding type;
the second operation prompt information is used for prompting that the first user operation is associated with a collection item of a corresponding type and guiding to re-execute other user operations different from the first user operation.
In the above scheme, the acquisition module is further configured to perform skin detection processing on the image, and determine a partial image that is included in the image and corresponds to a user operation;
and inputting the partial images into a machine learning model for operation prediction to obtain a prediction result of whether the user operation is the target operation.
In the above scheme, the generating module is further configured to receive input collection content in voice form corresponding to the target collection item;
the generating module is further configured to perform silence detection on the voice-form collection content to obtain its silent portions, and to remove those silent portions to obtain target collection content;
to segment the target collection content into a plurality of sound frames, and to perform feature extraction on each sound frame to obtain corresponding sound features;
to input the sound features of each sound frame into a machine learning model for speech recognition, obtaining text content corresponding to each sound frame;
and to generate the target information collection table based on the text content corresponding to each sound frame.
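As an illustration of the voice-processing pipeline in the preceding scheme (silence detection and removal, segmentation into sound frames, per-frame feature extraction, per-frame recognition), a minimal sketch follows. The energy threshold, the 25 ms frame size with 10 ms hop, the FFT-based feature, and the model's recognize API are all assumptions; the embodiments do not fix these parameters.

```python
import numpy as np

def remove_silence(samples: np.ndarray, rate: int, threshold: float = 1e-3) -> np.ndarray:
    """Silence detection: drop 20 ms windows whose mean energy is below the threshold."""
    win = rate // 50  # 20 ms of samples
    kept = [samples[i:i + win] for i in range(0, len(samples) - win + 1, win)
            if np.mean(samples[i:i + win] ** 2) >= threshold]
    return np.concatenate(kept) if kept else np.zeros(0)

def split_into_frames(samples: np.ndarray, rate: int) -> np.ndarray:
    """Segment the voiced signal into overlapping 25 ms sound frames (10 ms hop)."""
    size, hop = int(0.025 * rate), int(0.010 * rate)
    frames = [samples[s:s + size] for s in range(0, len(samples) - size + 1, hop)]
    return np.stack(frames) if frames else np.empty((0, size))

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Toy spectral feature for one sound frame: log-magnitude spectrum."""
    return np.log1p(np.abs(np.fft.rfft(frame)))

def transcribe(samples: np.ndarray, rate: int, model) -> str:
    """End-to-end: silence removal -> framing -> features -> per-frame recognition."""
    voiced = remove_silence(samples, rate)
    frames = split_into_frames(voiced, rate)
    return "".join(model.recognize(frame_features(f)) for f in frames)  # hypothetical model API
```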
The embodiment of the application also provides electronic equipment, which comprises:
a memory for storing executable instructions;
and the processor is used for realizing the processing method of the information collection table provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application also provides a computer readable storage medium which stores executable instructions, wherein when the executable instructions are executed by a processor, the processing method of the information collection table provided by the embodiment of the application is realized.
The embodiment of the application has the following beneficial effects:
By applying the embodiments of the application, when the user operation presented in the operation identification area is a target operation, a target collection item of the type associated with the target operation is created in the information collection table in response to the target operation, so that when input collection content corresponding to the target collection item is received, a target information collection table is generated based on the collection content. In this way, the user can create collection items in the information collection table simply by executing the target operation associated with the type of the target collection item. This enriches the ways an information collection table can be created, keeps the operation simple, improves creation efficiency, and saves hardware processing resources.
Drawings
FIG. 1 is a schematic diagram of a processing system 100 of an information collection table according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 500 implementing a method for processing an information collection table according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for processing an information collection table according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of creating an information collection table provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of creating an information collection table according to an embodiment of the present application;
FIG. 6 is a schematic representation of a presentation of an operation identification region provided by an embodiment of the present application;
FIG. 7 is a schematic representation of an operation identification region provided by an embodiment of the present application;
FIG. 8 is a schematic representation of a presentation of guidance information provided by an embodiment of the present application;
FIG. 9 is a schematic representation of a voice input interface provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a binding flow of target operations and corresponding types of target collection items provided by an embodiment of the present application;
FIG. 11A is a schematic representation of a first operation prompt provided in an embodiment of the present application;
FIG. 11B is a schematic representation of a second operation prompt provided in an embodiment of the present application;
FIG. 12 is a flowchart illustrating a method for processing an information collection table according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of an RGB color space for one color provided by an embodiment of the present application;
FIG. 14 is a schematic view of an elliptical model provided by an embodiment of the present application;
fig. 15 is a schematic diagram of distribution of skin pixels and non-skin pixels according to an embodiment of the present application;
FIG. 16 is a schematic diagram of a convolutional neural network provided in an embodiment of the present application;
FIG. 17 is a schematic diagram of facial expression recognition provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of recognition of user gestures provided by embodiments of the present application;
fig. 19 is a schematic diagram of sound information segmentation provided in an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not denote a specific ordering. It should be understood that, where permitted, "first", "second" and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before describing the embodiments of the present application in further detail, the terms involved in the embodiments are explained; the interpretations below apply to these terms.
1) Client: an application running in the terminal that provides various services, such as an instant messaging client or a video playing client.
2) "In response to": indicates the condition or state on which a performed operation depends. When the condition or state is satisfied, the operation or operations performed may occur in real time or with a set delay. Unless otherwise specified, no limitation is placed on the order in which multiple such operations are executed.
Based on the above explanation of the terms involved in the embodiments of the present application, the processing system of the information collection table provided in the embodiments is described below. Referring to fig. 1, fig. 1 is a schematic architecture diagram of a processing system 100 of an information collection table according to an embodiment of the present application. To support an exemplary application, a terminal (terminal 400-1 is shown as an example) is connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two, and uses a wireless or wired link for data transmission.
A terminal (e.g., terminal 400-1) for presenting an initial information collection table at a graphical interface 410 (graphical interface 410-1 and graphical interface 410-2 are exemplarily shown), and an operation recognition area corresponding to the information collection table; collecting an image containing user operation, and presenting the image through an operation identification area; transmitting the image including the user operation to the server 200;
a server 200 for receiving an image including a user operation and identifying whether the user operation included in the image is a target operation; when the user operation presented by the operation identification area is identified to be the target operation, acquiring a target collection item of a type associated with the target operation, and returning the target collection item to the terminal;
a terminal (e.g., terminal 400-1) for receiving the target collection item and creating a target collection item of a type associated with the target operation in the information collection table; and receiving the collection content corresponding to the input target collection item, and generating a target information collection table based on the collection content.
In practical applications, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, and basic cloud computing services such as big data and artificial intelligence platforms. The terminal (e.g., terminal 400-1) may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart television, a smart watch, etc. The terminals (e.g., terminal 400-1) and server 200 may be directly or indirectly connected by wired or wireless communication, and the present application is not limited thereto.
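For illustration, a minimal sketch of the terminal-to-server exchange in the architecture above: the terminal uploads a captured frame, and the server answers with the collection item type associated with a recognized target operation. The endpoint URL, payload format, and response fields are assumptions made for this sketch only.

```python
import requests  # third-party HTTP client

SERVER_URL = "http://server.example/recognize"  # hypothetical endpoint

def request_item_type(frame_jpeg: bytes) -> str | None:
    """Upload one captured frame; return the associated collection item type
    if the server recognizes a target operation, otherwise None."""
    resp = requests.post(SERVER_URL,
                         files={"image": ("frame.jpg", frame_jpeg, "image/jpeg")})
    resp.raise_for_status()
    body = resp.json()  # assumed shape: {"is_target": true, "item_type": "time"}
    return body["item_type"] if body.get("is_target") else None
```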
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 implementing a processing method of an information collection table according to an embodiment of the present application. In practical applications, the electronic device 500 may be a server or a terminal shown in fig. 1, and the electronic device 500 is taken as an example of the terminal shown in fig. 1, to describe an electronic device implementing a processing method of an information collection table in an embodiment of the present application, where the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in electronic device 500 are coupled together by bus system 540. It is appreciated that the bus system 540 is used to enable connected communications between these components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 540 in fig. 2 for clarity of illustration.
The processor 510 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor (for example, a microprocessor or any conventional processor), a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
Memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be read-only memory (ROM, Read-Only Memory) and the volatile memory may be random access memory (RAM, Random Access Memory). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
network communication module 552 is used to reach other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
the input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the processing device of the information collection table provided in the embodiments of the present application may be implemented in a software manner, and fig. 2 shows the processing device 555 of the information collection table stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: the presentation module 5551, the acquisition module 5552, the creation module 5553 and the generation module 5554 are logical, and thus may be arbitrarily combined or further split according to the implemented functions, the functions of each module will be described below.
In other embodiments, the processing device of the information collecting table provided in the embodiments of the present application may be implemented by combining software and hardware, and by way of example, the processing device of the information collecting table provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor, which is programmed to perform the processing method of the information collecting table provided in the embodiments of the present application, for example, the processor in the form of a hardware decoding processor may use one or more application specific integrated circuits (ASIC, application Specific Integrated Circuit), DSP, programmable logic device (PLD, programmable Logic Device), complex programmable logic device (CPLD, complex Programmable Logic Device), field programmable gate array (FPGA, field-Programmable Gate Array) or other electronic components.
Based on the above description of the processing system and the electronic device of the information collection table provided in the embodiments of the present application, the following describes a processing method of the information collection table provided in the embodiments of the present application. In some embodiments, the method for processing the information collection table provided in the embodiments of the present application may be implemented by a server or a terminal alone or in conjunction with the server and the terminal, and the method for processing the information collection table provided in the embodiments of the present application is described below by taking the terminal implementation as an example. Referring to fig. 3, fig. 3 is a flowchart of a method for processing an information collection table according to an embodiment of the present application, where the method for processing an information collection table according to an embodiment of the present application includes:
Step 101: the terminal presents an initial information collection table and an operation identification area corresponding to the information collection table.
Here, in practical applications, the terminal has installed on it a client such as an instant messaging client integrated with a collection table creation function, or a dedicated collection table creation client (such as a document client). The terminal receives a client operation instruction triggered by a user and, in response to the instruction, presents an initial information collection table through the running client. In actual implementation, the initial information collection table may be a newly created blank information collection table, or an information collection table already containing some collection content or collection items (such as name, gender, etc.); that is, when a user needs to create an information collection table, an initial information collection table can first be created in the client.
In practice, the information collection table provides the ability to collect information and convert collection results into a table. Specifically, the initial information collection table allows the user to create collection items (such as question-and-answer items, time items, etc.) and corresponding collection content according to personal needs, for example a question-and-answer item "Have you been vaccinated against XX?" or a time item "Please fill in your vaccination time (year, month, day)".
In the embodiment of the application, when presenting the initial information collection table, the terminal also presents an operation identification area corresponding to the information collection table. This area is used to identify user operations, such as expression operations and gesture operations, so that corresponding collection items can be created in the information collection table based on those operations. The creation of the initial information collection table is described first.
In some embodiments, the terminal may create an initial information collection table before presenting the initial information collection table by: receiving a voice creation instruction which is triggered in a voice mode and aims at an information collection table; in response to the voice creation instruction, an initial information collection table is created.
Here, the user can create an initial information collection table by voice. In practical applications, a client running on a terminal provides a voice creation instruction of an information collection table, where the voice creation instruction is used to trigger creation of an initial information collection table, such as "XX document, please create a blank form", etc. When the terminal receives a voice creation instruction for the information collection table, which is triggered by a voice manner, an initial information collection table is created in response to the voice creation instruction, and the initial information collection table is presented.
As an example, the function of the voice creation information collection table is automatically started at the collection table creation interface. That is, when the terminal presents the collection table creation interface, the user can voice-input a voice creation instruction corresponding to the information collection table. When the terminal receives a voice creation instruction for the information collection table, the terminal starts to create an initial information collection table in response to the voice creation instruction, and can jump to the created initial information collection table from the presented collection table creation interface.
In some embodiments, the terminal may create an initial information collection table before presenting the initial information collection table by: presenting a collection table creation interface, and presenting collection table creation function items in the collection table creation interface; in response to a trigger operation for the collection table creation function, an initial information collection table is created.
Here, the user may manually create the initial information collection table in addition to creating the initial information collection table by voice. In practical application, the terminal also provides a collection table creation interface, and provides collection table creation function items in the collection table creation interface. The user may create an initial information collection table based on the collection table creation function. Specifically, the terminal receives a trigger operation for the collection table creation function item, creates an initial information collection table in response to the trigger operation, and presents the initial information collection table.
As an example, referring to fig. 4, fig. 4 is a schematic diagram of creating an information collection table provided in an embodiment of the present application. Here, the terminal presents a collection table creation interface, and presents the collection table creation function item "create blank collection table" in that interface, as shown in diagram A of fig. 4; in response to a trigger operation for the collection table creation function item "create blank collection table", an initial information collection table is created and presented, as shown in diagram B of fig. 4.
In some embodiments, the terminal may create an initial information collection table before presenting the initial information collection table by: presenting a collection table creation interface, and presenting a blank information collection table creation template in the collection table creation interface; in response to a template selection operation for creating a template, a blank information collection table is created as an initial information collection table.
Here, the terminal may also provide the user with a plurality of information collection table creation templates, such as a blank information collection table creation template, a preset category of information collection table creation template (such as a student practice intention collection table template, a graduate practice situation collection table template, etc.), where the preset category of information collection table creation template includes collection items related to respective categories and respective collection contents, such as collection items related to student practice intentions, collection contents, etc., which are preset. When a user needs to create the information collection table of the corresponding category, the corresponding information collection table can be quickly created based on the creation template of the information collection table of the corresponding category.
In practical application, the terminal may present a collection table creation interface, and present a blank information collection table in the collection table creation interface, and when receiving a selection operation for a creation template of the blank information collection table, create the blank information collection table as an initial information collection table in response to the selection operation, and present the initial information collection table.
As an example, referring to fig. 5, fig. 5 is a schematic diagram of creating an information collection table provided in an embodiment of the present application. Here, the terminal presents a collection table creation interface, and presents creation templates for a plurality of information collection tables in that interface, including the blank information collection table template "blank collection table", the student practice intention template "student practice intention collection table", and the graduate practice situation template "graduation practice situation collection table", as shown in diagram A of fig. 5; when a selection operation for the creation template of the blank information collection table is received, the blank information collection table is created as the initial information collection table in response to the selection operation, and is presented as shown in diagram B of fig. 5.
In some embodiments, after the terminal presents the initial information collection table, the default collection item carrying the default collection content and the editing function item corresponding to the default collection item may also be presented in the information collection table; when a trigger operation for editing the function item is received, an editing interface for editing the default collection item and corresponding default collection content is presented;
Accordingly, the terminal may generate the target information collection table based on the collection contents by: and generating a target information collection table based on the collection content and combined with the default collection items edited based on the editing interface and the corresponding default collection content.
Here, after the terminal creates the initial information collection table, the initial information collection table may further include a default collection item carrying default collection contents, such as a default collection item "age", corresponding default collection contents "10-20 years old, 21-40 years old, 41-60 years old, 60 years old or older"; the default collection item "gender", the corresponding default collection content "male, female", etc. In this way, a part of the collection items and the collection contents can be provided to the user in advance, thereby saving the time for the user to create the collection items and collect the contents.
In practical application, corresponding editing function items can be provided for the default collection items, and when the user does not need the default collection items and the default collection contents contained in the initial information collection table or needs to modify the default collection items, the editing function items can be implemented based on the editing function items. When the terminal receives a trigger operation aiming at the editing function item, an editing interface is presented in response to the trigger operation, and a user can edit the default collection item and corresponding default collection content according to personal needs based on the editing interface. Thus, when the target information collection table is generated, the edited default collection item and the corresponding default collection content are combined to generate.
After the initial information collection table is created and the presentation description is finished, the description of the operation identification area corresponding to the information collection table is continued, and the operation identification area can be used for a user to create collection items in the information collection table by executing target operations. In some embodiments, the terminal may present the operation identification area corresponding to the information collection table as follows: receiving an item creation instruction for an information collection table; and responding to the project creation instruction, and presenting an operation identification area corresponding to the information collection table.
Here, when an item creation instruction for the information collection table is received, the terminal may present the operation identification area corresponding to the information collection table in response to that instruction. This avoids reducing the usable screen area by displaying the operation identification area at all times.
In some embodiments, the terminal may receive an item creation instruction for the information collection table by: presenting an item creation function item in the information collection table, and receiving an item creation instruction for the information collection table in response to a trigger operation for the item creation function item; or, a voice item creation instruction for the information collection table triggered based on the voice form is received.
In practical application, the terminal may receive the project creation instruction as follows: the terminal presents an item creation function item in the information collection table, and when a trigger operation for that function item is received, receives an item creation instruction for the information collection table in response to the trigger operation. As an example, referring to fig. 6, fig. 6 is a schematic representation of an operation identification area provided in an embodiment of the present application. Here, the terminal presents the item creation function item "add question" in the information collection table, as shown in diagram A of fig. 6; in response to a trigger operation for "add question", an item creation instruction for the information collection table is received, and in response to that instruction, the operation identification area corresponding to the information collection table is presented, as shown in diagram B of fig. 6.
In practical application, the terminal may also receive the project creation instruction by: the terminal can provide a function of voice creation of the item, and the user can trigger a voice item creation instruction to create the collection item. When the terminal receives a voice item creation instruction which is triggered based on a voice form and aims at the information collection table, the terminal responds to the item creation instruction and presents an operation identification area corresponding to the information collection table.
In some embodiments, the terminal may turn on the operation creation function by: receiving an opening instruction of an operation creation function aiming at an information collection table; in response to the start instruction, starting an operation creation function that creates a collection item in the information collection table by executing the target operation;
accordingly, the terminal may present the operation identification area corresponding to the information collection table as follows: when the operation creation function is turned on, an operation identification area corresponding to the information collection table is presented.
Here, the terminal also provides an operation creation function, that is, a function of creating collection items in the information collection table by executing the target operation. Based on this, in practical application, the user can create a function by turning on the operation to realize the presentation of the operation recognition area. Specifically, the terminal receives an opening instruction of an operation creation function for the information collection table; in response to the start instruction, an operation creation function of creating a collection item in the information collection table by executing a target operation is started, at which time, when the operation creation function is started, an operation identification area corresponding to the information collection table is presented.
In some embodiments, the terminal may receive an on instruction for the operation creation function of the information collection table by: receiving an opening instruction of an operation creation function for the information collection table in response to a trigger operation of the operation creation function switch for presentation; or, a voice start instruction of an operation creation function for the information collection table triggered based on the voice form is received.
In practical application, the terminal may receive the opening instruction for the operation creation function of the information collection table as follows: the terminal presents an operation creation function switch, and when a trigger operation for the switch is received, receives an opening instruction in response to the trigger operation. As an example, referring to fig. 7, fig. 7 is a schematic representation of an operation identification area provided in an embodiment of the present application. Here, the terminal presents the operation creation function switch "expression, gesture creation form" in the information collection table, as shown in diagram A of fig. 7; in response to a trigger operation for the switch, an opening instruction for the operation creation function is received, the operation creation function is turned on in response to the instruction, and the operation identification area corresponding to the information collection table is presented, as shown in diagram B of fig. 7.
In practical application, the terminal may also receive an opening instruction for the operation creation function of the information collection table by: the terminal can provide the function of the voice opening operation creation function, and the user can trigger the voice opening instruction to open the operation creation function. When the terminal receives a voice opening instruction which is triggered based on a voice form and aims at an operation creation function of the information collection table, the operation creation function of creating collection items in the information collection table through execution of target operation is started in response to the opening instruction, and at the moment, an operation identification area corresponding to the information collection table is presented when the operation creation function is started.
In some embodiments, after presenting the operation identification area corresponding to the information collection table, the terminal may also present the guiding information corresponding to the target operation in the operation identification area; wherein the guidance information is used for guiding to create a target collection item in the information collection table by executing the target operation.
Here, the terminal may also present guide information corresponding to the target operation in the operation recognition area to guide the user through the guide information to create a target collection item in the information collection table by performing the target operation. As an example, referring to fig. 8, fig. 8 is a schematic representation of presentation of guidance information provided in an embodiment of the present application. Here, the terminal presents, in the operation recognition area, guidance information corresponding to the target operation, such as: executing a 666 gesture operation to create a question and answer collection item; executing the smile expression operation creates a time topic collection item. Of course, the guiding information may be text or image, for example, a graphic schematic diagram corresponding to a gesture operation of "666" and a schematic diagram corresponding to a smile expression operation may be used.
Step 102: an image containing user operations is acquired and presented by operating the identification area.
Here, after the terminal presents the initial information collection table and the operation recognition area corresponding to the information collection table, an image including the user operation is collected, for example, the image collection may be performed by the image capturing device, and the collected image is presented through the operation recognition area. The user operation may be a gesture operation, an expression operation, etc., such as a "666" gesture operation, various numbers (e.g., 0, 1, 2, etc.) gesture operation, a smiling expression operation, a surprise expression operation, etc.
In practical applications, the image presented in the operation identification area may be recognized to determine whether the user operation it contains is a target operation. Here, a target operation is a user operation associated with a target collection item of a corresponding type, where collection item types may include question-and-answer items, time items, single-choice items, picture items, and the like, and target operations may include gesture operations and expression operations, such as a "666" gesture operation, various number gesture operations (e.g., 0, 1, 2), a smiling expression operation, a surprised expression operation, and so on.
Accordingly, the "666" gesture operation may be associated with a collection item of the question-and-answer type, the smiling expression operation may be associated with a collection item of the time type, and so on, as illustrated by the sketch below; the embodiment of the application does not limit these associations.
In some embodiments, after presenting the image through the operation identification area, the terminal presents a first image recognition frame corresponding to the image in the operation identification area while user operation recognition is in progress; when user operation recognition on the image is complete, a second image recognition frame corresponding to the image is presented in the operation identification area. The positions of the first and second image recognition frames correspond to the target area of the image that contains the user operation, and the two frames differ in display style.
In practical applications, when the image presented in the operation identification area is recognized, the recognized area may be marked by an image recognition frame, for example an image recognition frame presented over the target area of the image containing the user operation. Specifically, while user operation recognition is in progress, a first image recognition frame corresponding to the image may be presented in the operation identification area; when recognition completes, a second image recognition frame is presented instead. The two frames have different display styles, such as different display colors, as sketched below.
In some embodiments, the terminal may identify the image presented within the operation identification area by: performing skin detection processing on the image, and determining partial images which are contained in the image and correspond to user operation; and inputting the partial images into a machine learning model for operation prediction to obtain a prediction result of whether the user operation is the target operation.
Here, the terminal may first perform skin detection processing on the image to determine the partial image, contained in the image, that corresponds to the user operation; specifically, the skin detection processing may be performed by a skin detection model, such as a Gaussian model, an ellipse model, or a Bayesian classification model. After the partial image corresponding to the user operation is determined, the partial image is input into a machine learning model for operation prediction to obtain a prediction result of whether the user operation is the target operation. The machine learning model can be constructed based on a convolutional neural network.
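As an example, the following is a minimal sketch of this two-stage recognition flow; the skin_detector and classifier callables and the operation labels (such as "gesture_666") are illustrative assumptions, since the embodiment does not specify a concrete interface.

```python
from typing import Callable, Iterable

def is_target_operation(image,
                        skin_detector: Callable,   # returns the partial image containing the operation
                        classifier: Callable,      # CNN-based multi-class operation predictor
                        target_ops: Iterable[str]) -> bool:
    # Skin detection isolates the hand/face region (e.g. via a Gaussian,
    # ellipse, or Bayesian skin model), serving as preprocessing.
    partial_image = skin_detector(image)
    # The machine learning model predicts the operation contained in the
    # partial image, e.g. "gesture_666" or "expression_smile".
    predicted_op = classifier(partial_image)
    # The user operation is a target operation only if it has been bound.
    return predicted_op in set(target_ops)
```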
Step 103: when the user operation presented by the operation identification area is a target operation, in response to the target operation, a target collection item of a type associated with the target operation is created in the information collection table.
Here, after the terminal presents an image containing a user operation through the operation recognition area and recognizes the image to determine whether the user operation contained in the image is a target operation, when the user operation presented by the operation recognition area is recognized as the target operation, a target collection item of a type associated with the target operation is acquired, thereby creating the target collection item in the information collection table.
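As an example, a minimal sketch of this step follows; the mapping keys and type names are illustrative assumptions, since the embodiment describes the operation-to-type association only abstractly.

```python
from dataclasses import dataclass, field

# Hypothetical association between target operations and collection item types.
OPERATION_TO_ITEM_TYPE = {
    "gesture_666": "question_and_answer",
    "expression_smile": "time_question",
}

@dataclass
class CollectionItem:
    item_type: str
    content: str = ""          # filled in later, e.g. via voice input (step 104)

@dataclass
class InformationCollectionTable:
    items: list = field(default_factory=list)

    def create_item_for(self, operation: str):
        """Create a target collection item of the type associated with the
        recognized target operation; non-target operations are ignored."""
        item_type = OPERATION_TO_ITEM_TYPE.get(operation)
        if item_type is None:
            return None
        item = CollectionItem(item_type=item_type)
        self.items.append(item)
        return item
```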
In some embodiments, the terminal may present creation prompt information during the creation of the target collection item of the type associated with the target operation; the creation prompt information is used for prompting that the target collection item of the type associated with the target operation is being created.
Here, in the process of creating the target collection item of the type associated with the target operation, the terminal may also present creation prompt information, so as to inform the user, based on the creation prompt information, that the target collection item is being created, thereby avoiding leaving the user waiting anxiously without feedback.
Step 104: receive input collection content corresponding to the target collection item, and generate a target information collection table based on the collection content.
And the target information collection table is used for enabling at least two users with operation authorities to edit corresponding information based on the collected content.
Here, after the terminal creates the target collection item of the type associated with the target operation in the information collection table, corresponding collection content needs to be input for the target collection item. For example, if the created target collection item is a question, the specific content of the question needs to be input, such as "do you like singing?". After receiving the input collection content of the target collection item, the terminal generates a target information collection table based on the collection content. The generated target information collection table can be used by at least two users with operation authority to edit corresponding information based on the collection content.
In practical application, the terminal can also present an operation authority setting interface, and at least two users for selection are presented in the operation authority setting interface; in response to a selection operation for the target user, the target user is taken as a user having operation authority.
In some embodiments, after creating a target collection item of a type associated with a target operation in the information collection table, the terminal may turn on a voice input function for the target collection item and present a voice input interface corresponding to the target collection item;
accordingly, the terminal may receive the input collection content corresponding to the target collection item as follows: receiving the collection content in voice form corresponding to the target collection item, input based on the voice input interface.
Here, after creating the target collection item of the type associated with the target operation in the information collection table, the terminal may turn on the voice input function for the target collection item while presenting the voice input interface corresponding to the target collection item. Based on the above, the terminal receives the collection content corresponding to the target collection item in the voice form, which is input based on the voice input interface.
In some embodiments, the terminal may present voice acquisition prompt information corresponding to the target collection item in the voice input interface; the voice collection prompt message is used for prompting that collection contents aiming at target collection items are being collected; and presenting the process of style change of the voice acquisition prompt information along with the input of the collected content in the voice form.
In practical applications, the terminal may also present, in the voice input interface, voice collection prompt information corresponding to the target collection item, such as "recognizing speech…", so as to prompt, through the voice collection prompt information, that the collection content for the target collection item is being collected. Meanwhile, as the collection content in voice form is input, the terminal may present the process of the voice collection prompt information changing in style; for example, if the prompt information takes the form of a waveform, the terminal presents the waveform changing as the voice content is input, and the frequency of the waveform change may match the frequency of the voice.
As an example, referring to fig. 9, fig. 9 is a schematic presentation diagram of a voice input interface provided in an embodiment of the present application. Here, the terminal creates a target collection item of the type associated with the target operation in the information collection table, i.e., a question-and-answer collection item "please input a question"; at this time, the voice input function for the target collection item is turned on, a voice input interface corresponding to the target collection item is presented, and in the voice input interface, voice collection prompt information "recognizing speech…" corresponding to the target collection item is presented while the user's voice input is awaited.
In some embodiments, prior to presenting the initial information collection table, the terminal may implement the association between user operations and collection items by: presenting an operation setting area corresponding to the information collection table and at least two types of collection items including a target collection item; collecting a first image containing first user operation, and presenting the first image through an operation setting area; and after the first user operation is successfully identified, responding to the selection operation for the target collection item, and associating the first user operation with the target collection item as the target operation.
Here, the terminal may present an operation setting area corresponding to the information collection table, together with at least two types of collection items including the target collection item. The operation setting area may be presented when the operation creation function, which creates collection items in the information collection table by executing target operations, is turned on; in that case, the terminal provides the operation setting area and the at least two types of collection items so as to associate the target operation with the type of the target collection item.
Specifically, the terminal acquires a first image containing a first user operation, and presents the first image through an operation setting area; and after the first user operation is successfully identified, responding to the selection operation for the target collection item, and associating the first user operation as the target operation with the type of the target collection item. For example, when the first user operation is recognized as a 666 gesture operation, the 666 gesture operation is associated with the target collection item of the question type in response to the selection operation for the target collection item of the question type.
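A minimal sketch of this binding step, under the same illustrative labels as above, might look as follows; it also reflects the behavior, described below for the second operation prompt information, of rejecting an operation that is already bound.

```python
def bind_operation(bindings: dict, recognized_operation: str,
                   selected_item_type: str) -> bool:
    """Associate a successfully recognized first user operation, as the
    target operation, with the collection item type selected by the user."""
    if recognized_operation in bindings:
        # Already bound to some collection item type; the terminal would
        # prompt the user to perform a different operation instead.
        return False
    bindings[recognized_operation] = selected_item_type
    return True

# Usage: after "666" is recognized and the question-and-answer type is selected.
bindings: dict = {}
assert bind_operation(bindings, "gesture_666", "question_and_answer")
assert not bind_operation(bindings, "gesture_666", "time_question")  # rejected
```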
In some embodiments, the terminal may present at least two types of collection items including the target collection item by: presenting a drop-down function item corresponding to the collection item type; in response to a triggering operation for the drop-down function item, at least two types of collection items including a target collection item are presented.
In practical application, the terminal may present a drop-down function item corresponding to the collection item type, for example, the drop-down function item is presented below the operation setting area. In response to a triggering operation for the drop-down function item, at least two types of collection items including a target collection item are presented.
As an example, referring to fig. 10, fig. 10 is a schematic diagram of the flow for binding a target operation with a target collection item of a corresponding type provided in an embodiment of the present application. Here, the terminal presents an operation setting area corresponding to the information collection table, and presents a drop-down function item corresponding to the collection item type below the operation setting area, as shown in diagram A in fig. 10;
in response to a triggering operation for the drop-down function item, at least two types of collection items including the target collection item are presented, such as a question-and-answer collection item, a time question collection item, and a single-choice collection item. Meanwhile, the collected image containing the first user operation (namely, the "666" gesture operation) is presented through the operation setting area. While this image is undergoing operation recognition, a first image recognition frame corresponding to the first user operation is presented in the operation setting area; the first image recognition frame is yellow, as shown in diagram B in fig. 10;
after the recognition of the first user operation succeeds, a second image recognition frame corresponding to the first user operation is displayed in the operation setting area; the second image recognition frame is blue (characterizing successful recognition), as shown in diagram C in fig. 10;
in response to a selection operation for the target collection item "collection item of question-and-answer type", the first user operation is associated with the target collection item as the target operation, and a prompt message "association successful" is presented, as shown in diagram D in fig. 10.
In some embodiments, the terminal may present, in the operation setting area, first operation prompt information corresponding to the first user operation; the first operation prompt information is used for prompting that the first user operation is associated with the target collection item through executing the first user operation.
In practical applications, the terminal may further present, in the operation setting area, first operation prompt information corresponding to the first user operation, so as to prompt the user, through the first operation prompt information, to associate the first user operation with the target collection item by executing the first user operation. As an example, referring to fig. 11A, fig. 11A is a schematic presentation diagram of first operation prompt information provided in an embodiment of the present application. Here, the terminal presents first operation prompt information corresponding to the first user operation in the operation setting area, for example: bind the "666" gesture operation with the question-and-answer collection item; bind the smile expression operation with the time question collection item. The first operation prompt information may be text or an image, for example, a graphic schematic diagram corresponding to the "666" gesture operation or an expression schematic diagram corresponding to the "smile" expression operation. The first operation prompt information shown in fig. 11A is a graphic schematic diagram corresponding to the "666" gesture operation together with a corresponding text description.
In some embodiments, the terminal may present a second operation prompt when the first user operation is associated with a corresponding type of collection item; the second operation prompting information is used for prompting that the first user operation is associated with a collection item of a corresponding type and guiding the re-execution of other user operations different from the first user operation.
In practical applications, when the first user operation has already been associated with a collection item of a corresponding type, the terminal can present second operation prompt information, so as to prompt, based on the second operation prompt information, that the first user operation is already associated with a collection item of the corresponding type and to guide the user to perform another user operation different from the first user operation. As an example, referring to fig. 11B, fig. 11B is a schematic presentation diagram of second operation prompt information provided in an embodiment of the present application. Here, after the terminal successfully recognizes the first user operation and determines that it is already associated with a collection item of a corresponding type, second operation prompt information is presented in the operation setting area, for example: the current "666" gesture operation is already bound to the question-and-answer collection item, please perform a different operation. In actual implementation, schematic diagrams of other operations that are not yet bound to collection items of corresponding types may also be presented, to prompt the user to bind collection items by performing the illustrated operations.
In some embodiments, the terminal may receive the collection content corresponding to the input target collection item by: receiving collection content corresponding to a target collection item in an input voice form;
accordingly, the terminal may generate the target information collection table based on the collection contents by: performing silence detection on the collected content in the voice form to obtain a silence part in the collected content, and removing the silence part in the collected content to obtain target collected content; segmenting the target collection content to obtain a plurality of sound frames contained in the target collection content, and extracting the characteristics of each sound frame to obtain corresponding sound characteristics; respectively inputting sound characteristics corresponding to each sound frame into a machine learning model for voice recognition to obtain text content corresponding to each sound frame; and generating a target information collection table based on the text content corresponding to each sound frame.
Here, when the collected content received by the terminal is a collected content in a voice form, silence detection may be performed on the collected content in the voice form to obtain a silence portion in the collected content, thereby removing the silence portion in the collected content to obtain a target collected content. Then, the target collection content is segmented, specifically, the target collection content can be segmented frame by frame, so as to obtain a plurality of sound frames contained in the target collection content. And then extracting the characteristics of each sound frame to obtain corresponding sound characteristics. And finally, inputting the sound characteristics corresponding to each sound frame into a machine learning model for voice recognition to obtain text contents corresponding to each sound frame, thereby generating a target information collection table based on the text contents corresponding to each sound frame. Here, the machine learning model may be constructed based on a convolutional neural network.
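As an example, the following is a minimal sketch of this pipeline, assuming 25 ms sound frames (an assumption; only the 10 ms inter-frame overlap is stated later in this document), with librosa MFCC extraction standing in for the feature extraction step and a hypothetical model exposing a per-frame predict method.

```python
import numpy as np
import librosa

def transcribe_collection_content(samples: np.ndarray, sr: int, model) -> str:
    # `samples` is assumed to be a mono float waveform.
    frame_len = int(0.025 * sr)                 # assumed 25 ms analysis frames

    # 1) Silence detection: drop low-energy chunks, keeping only the
    #    target collection content.
    n = (len(samples) // frame_len) * frame_len
    chunks = samples[:n].reshape(-1, frame_len)
    energies = (chunks ** 2).mean(axis=1)
    target = chunks[energies > 0.1 * energies.max()].reshape(-1)

    # 2) Segment into overlapping sound frames and extract one MFCC
    #    feature vector per frame (10 ms overlap between adjacent frames).
    hop = frame_len - int(0.010 * sr)
    mfcc = librosa.feature.mfcc(y=target, sr=sr, n_mfcc=13,
                                n_fft=frame_len, hop_length=hop)

    # 3) Per-frame recognition; the text contents are concatenated to
    #    fill the target collection item of the information collection table.
    return "".join(model.predict(mfcc[:, i]) for i in range(mfcc.shape[1]))
```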
In the embodiment of the present application, an operation identification area corresponding to the initial information collection table is presented, and an image containing a user operation is presented through the operation identification area; when the user operation presented in the operation identification area is a target operation, a target collection item of the type associated with the target operation is created in the information collection table in response to the target operation, so that when the input collection content corresponding to the target collection item is received, a target information collection table is generated based on the collection content. In this way, the user can create collection items in the information collection table by executing target operations associated with the types of the target collection items, which enriches the ways of creating information collection tables, is simple to operate, improves the creation efficiency of information collection tables, and saves hardware processing resources.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below. First, explanation is made on nouns provided in the embodiments of the present application, including:
1) Information collection table: provides information collection and the ability to convert collection results into a table;
2) Question (title): a collection item in the information collection table, which the user fills in or selects to realize information collection;
3) Question type: the type of a question in the information collection table, such as fill-in-the-blank, single choice, multiple choice, picture, time, date, and the like;
4) Options: the candidate entries of single-choice, multiple-choice, drop-down and similar question types, from which the user makes a selection.
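For illustration, the nouns above can be modeled as a small data structure; the identifiers below are assumptions mirroring the list, as the document itself defines no code-level model.

```python
from enum import Enum

class QuestionType(Enum):
    """Question types named above; the member names are illustrative."""
    FILL_IN_THE_BLANK = "fill_in_the_blank"
    SINGLE_CHOICE = "single_choice"
    MULTIPLE_CHOICE = "multiple_choice"
    DROP_DOWN = "drop_down"
    PICTURE = "picture"
    TIME = "time"
    DATE = "date"

# Question types that carry options from which the user must choose.
TYPES_WITH_OPTIONS = {QuestionType.SINGLE_CHOICE,
                      QuestionType.MULTIPLE_CHOICE,
                      QuestionType.DROP_DOWN}
```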
In the related art, an information collection table is created through traditional keyboard-and-mouse interaction such as manual input and dragging. This interaction mode often suffers from compatibility problems, such as input cursor height issues on iOS, pages not resetting after the iOS soft keyboard is dismissed, and the pop-up keyboard covering the text box on Android; these problems are difficult to reproduce, investigate, and fix, and seriously affect the user's interaction experience.
Based on this, an embodiment of the present application provides a processing method for an information collection table, so as to at least solve the above-mentioned problems. According to this method, an information collection table can be created by combining gestures, expressions, and voice, which can greatly improve the user's interaction experience and increase the user conversion rate, and is a more novel and intelligent mode. Specifically, the user can add or delete questions, change question types, set options, and publish and share the table with other users through custom gestures, expressions, voice, and the like. In practical applications, the client provides a gesture and expression recognition interface; the user can bind a specific gesture or expression to a built-in function of the information collection table (such as creating a question-and-answer item), and then, combined with voice recognition, enter customized text for each created question or option, thereby completing the creation of the information collection table.
Next, a method for processing the information collection table provided in the embodiment of the present application will be described first from the product side, including:
first, the embodiment of the present application provides a powerful voice wake-up function; for example, a user may say "XX document, please create an empty form" to quickly create a new blank information collection table through voice. The created blank information collection table is in a to-be-edited state (which is also a state awaiting voice commands, i.e., other valid voice commands can be input). The editing page in the to-be-edited state is shown as diagram B in fig. 4 or diagram B in fig. 5, where an intelligent expression and gesture input function corresponding to the information collection table is also provided; the user can activate this function through a built-in voice command or by manual clicking.
Secondly, the user can wake up the intelligent expression and gesture input function through the built-in voice wake-up instruction "XX document", or by manually clicking the corresponding function entry. The two modes are functionally identical: an image capturing window (namely, the operation setting area) is presented on the right of the created information collection table, and when a gesture or facial expression is detected through the operation setting area, a yellow rectangular frame appears in the operation setting area, indicating that the current gesture or expression is being interpreted, as shown in diagram B in fig. 10.
Thirdly, after the gesture or expression is successfully interpreted, the yellow rectangular frame in the operation setting area changes into a blue rectangular frame, and prompt information for successful input and for binding a function is displayed, as shown in diagram C in fig. 10. At this point, the user can issue the built-in voice command (prefixed with "XX document") to bind the "new question and answer" function, thereby automatically associating the recognized gesture or expression operation with that function; alternatively, after selecting a function, the user clicks the "bind" function button to bind the currently recognized gesture or expression to the selected function. For example, a "666" gesture operation may be bound to the built-in "new question and answer" function, see fig. 10.
Fourth, the user can start to make the information collection table using the successfully bound gestures and expressions. When a bound gesture or facial expression is detected, a prompt message such as "recognition successful, creating XX collection item" is presented, and the built-in function bound to the gesture or expression is executed; for example, the "666" gesture is bound to the "new question and answer" function, so when a "666" gesture operation is detected, a question-and-answer item is automatically created. When the built-in function (such as creating a question) completes, the voice recognition function is automatically woken up, the terminal waits for the user to enter the collection content by voice for the created collection item, and a "recognizing speech" prompt is presented over the area awaiting input, see fig. 9.
Next, the processing method of the information collection table provided in the embodiment of the present application is described from the technical side. Referring to fig. 12, fig. 12 is a flow chart of a processing method of an information collection table provided in an embodiment of the present application, including:
step 201: the user invokes by voice, or clicks, the intelligent input function entry;
step 202: the user inputs an expression or gesture;
step 203: the collection table interprets the expression or gesture;
step 204: the user selects a collection item of a target type and binds the collection item with the read expression or gesture;
here, the user may perform multiple bindings, i.e., bind one operation to the creation function of each type of collection item.
Step 205: the user inputs the expression or gesture again;
step 206: the collection table interprets and creates collection items of the type associated with the input expression or gesture;
step 207: the collection table automatically wakes up voice recognition and edits collection content corresponding to the collection item.
Specifically, the processing method of the information collection table provided in the embodiment of the present application mainly includes two parts in a technical aspect: image recognition and voice recognition. The image recognition technology is used for processing expression and gesture information of the user; voice recognition is used to process the user's voice information, and image recognition and voice recognition are described further below, respectively.
First, image recognition: "interpreting" user expressions and gestures. The image recognition involved in the embodiment of the present application mainly recognizes the user's expressions and gestures, with the refinement points being the user's facial and hand information. Here, skin detection is first used to extract the skin portion of the image containing the user operation, as preprocessing of the input data for the subsequent image recognition model; the preprocessed image data is then taken as the input of the image recognition model, multi-class prediction is performed through a convolutional neural network to obtain the type of expression or gesture, and finally a collection item of the corresponding type is created according to that type.
1) Skin detection: preprocessing of "interpretation" of user expressions and gestures.
Skin detection is a picture detection method for determining which pixels in a picture belong to human skin. A skin model is usually used in skin detection, and the picture is studied under multiple color spaces. A color space, also called a color model, describes colors under a certain standard; the RGB color space is the best known, and there are also color spaces such as HSV and YCbCr. Many experiments have demonstrated that the YCbCr color space has excellent clustering properties in skin detection; see fig. 13, which is a schematic diagram of the RGB color space for one color provided in an embodiment of the present application.
The skin model is a mathematical model that expresses, in an algebraic manner, which pixels in a picture belong to human skin. Currently, the most widely used skin models are the Gaussian model, the ellipse model, Bayesian classification, and the like. As shown in fig. 14, fig. 14 is a schematic diagram of the ellipse model provided in an embodiment of the present application. Here, the ellipse model is used as the skin model in the embodiment of the present application; it reflects the fact that, when skin pixels are mapped onto the CbCr plane, their distribution is concentrated in an approximately elliptical region.
Finally, to judge whether a pixel represents human skin, the pixel is converted to the CbCr plane and tested against the ellipse: if it falls inside, the pixel lies in a skin area; if not, it does not. The skin pixels and non-skin pixels are then set to white and black respectively, as shown in fig. 15, which is a schematic distribution diagram of skin and non-skin pixels provided in an embodiment of the present application.
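A minimal OpenCV sketch of this ellipse-model skin detection follows; the ellipse center, axes, and angle are common empirical values from the literature, not values given in this document.

```python
import cv2
import numpy as np

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Map each pixel to the CbCr plane and test whether it falls inside
    the skin ellipse; skin pixels become white (255), others black (0)."""
    # Rasterize the skin ellipse once over the 256x256 (Cr, Cb) plane.
    ellipse = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(ellipse, (113, 155), (23, 15), 43.0, 0.0, 360.0, 255, -1)

    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]
    cb = ycrcb[:, :, 2]
    # Table lookup per pixel: inside the ellipse means skin (255).
    return ellipse[cr, cb]
```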
2) Convolutional neural network and recurrent neural network: classifying the preprocessed data.
A convolutional neural network (CNN) includes an input layer, hidden layers, and an output layer, as shown in fig. 16; fig. 16 is a schematic structural diagram of the convolutional neural network provided in an embodiment of the present application. The hidden layers are divided into convolution layers, pooling layers, fully connected layers, and the like. Among these, the most critical is the convolution layer, which, unlike an ordinary neural network, has the characteristic of local connection. Local connection is also called sparse connection: each neuron of a convolution layer in a convolutional neural network is connected only to some, rather than all, of the neurons in the previous layer, which reduces the number of parameters and improves computational efficiency. A unit need not be connected to every unit in the layer above; it only needs to be connected to part of them, and the overall feature is then obtained by combining multiple pieces of local information in deeper layers. The connection structure of the convolutional neural network is shown in fig. 16. In practical applications, convolutional neural networks typically employ two-dimensional convolution operations, where the input data is arranged as a rectangle rather than as a vector as in conventional neural networks.
In addition, in deep learning some data is sequential, and a recurrent neural network (RNN) can be used to process such data. Sequence data is data in which order matters: earlier outputs have a certain relationship with later outputs. Because hidden-layer neurons in a recurrent neural network have self-loops, the relationships between outputs at different moments can be captured, which helps solve problems that some traditional neural networks handle poorly. Fig. 17 is a schematic diagram of character expression recognition provided in an embodiment of the present application. Here, the collected image containing a character expression is preprocessed (e.g., data enhancement, normalization, etc.) to obtain a preprocessed image; the preprocessed image is input into a trained neural network model, which includes convolution layers, downsampling layers, fully connected layers, and the like; expression classification prediction is performed on the image through the trained neural network model, and the category corresponding to the character expression, such as anger, happiness, or sadness, is output.
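As an example, a minimal PyTorch sketch of such a multi-class convolutional classifier (input layer, convolution and pooling hidden layers, fully connected output) is shown below; all layer sizes are illustrative assumptions, not values taken from this document.

```python
import torch
import torch.nn as nn

class GestureExpressionCNN(nn.Module):
    """Minimal multi-class CNN for gesture/expression classification."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local (sparse) connections
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling / downsampling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # For assumed 64x64 grayscale crops, two poolings yield 32 x 16 x 16.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, 64, 64) skin-detected crops; returns per-class scores.
        x = self.features(x)
        return self.classifier(x.flatten(1))
```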
In this way, in terms of gesture recognition, the present application can recognize the digits 0 to 9 and some other common gestures; in terms of expression recognition, expressions are classified by facial features (such as whether the eyes and mouth are open or closed) and by mood (such as happiness or anger). The overall flow of "interpreting" user expressions and gestures is shown in fig. 18, which is a schematic diagram of user gesture recognition provided in an embodiment of the present application: first, the gesture image is preprocessed with skin detection, and then the preprocessed image is classified by deep learning (i.e., through the neural network model) to obtain a classification result, e.g., the predicted gesture operation is gesture "5".
Second, speech recognition: used for the user to intelligently enter the collection content included in the collection items of the information collection table, such as question text (e.g., titles).
Here, before speech recognition starts, the voice data is first preprocessed, including removing the silence at both ends of the speech segment. After preprocessing is completed, the sound information needs to be segmented, as shown in fig. 19; fig. 19 is a schematic diagram of sound information segmentation provided in an embodiment of the present application. Each segment is called a sound frame, and it should be noted that adjacent frames overlap, i.e., there is a 10 ms overlapping portion between sound frame (1) and sound frame (2) shown in fig. 19. After the segmented sound frames are acquired, they need to be converted into more descriptive information; in the embodiment of the present application, the MFCC features of the sound are extracted and each sound frame is converted into a matrix. The sound features with strong descriptive ability are then input into an acoustic model for classification to obtain the speech recognition result. An information collection table is thereby created based on the voice-recognized collection content of the collection items, so that information collection can be performed based on the created table.
By applying the embodiment of the application, the information collection table can be created by combining gestures, expressions and voices, so that the interactive experience of the user can be greatly enhanced, the user liveness of the information collection table is improved, and the user conversion rate is increased.
Continuing with the description below of an exemplary structure of the processing device 555 implemented as a software module for the information collection table provided in the embodiments of the present application, in some embodiments, as shown in fig. 2, the software module stored in the processing device 555 for the information collection table of the memory 550 may include:
a presenting module 5551, configured to present an initial information collection table and an operation identification area corresponding to the information collection table;
an acquisition module 5552, configured to acquire an image including a user operation, and present the image through the operation recognition area;
a creation module 5553, configured to, when a user operation presented by the operation identification area is a target operation, create, in the information collection table, a target collection item of a type associated with the target operation in response to the target operation;
a generating module 5554, configured to receive input collection content corresponding to the target collection item, and generate a target information collection table based on the collection content;
The target information collection table is used for enabling at least two users with operation authorities to edit corresponding information based on the collected content.
In some embodiments, the creating module 5553 is further configured to receive a voice creating instruction for the information collection table triggered by a voice manner;
and responding to the voice creation instruction, and creating the initial information collection table.
In some embodiments, the creating module 5553 is further configured to present a collection table creating interface, and present a collection table creating function item in the collection table creating interface;
the initial information collection table is created in response to a trigger operation for the collection table creation function.
In some embodiments, the creating module 5553 is further configured to present a collection table creating interface, and present a creating template of the blank information collection table in the collection table creating interface;
in response to a template selection operation for the creation template, a blank information collection table is created as the initial information collection table.
In some embodiments, the presenting module 5551 is further configured to present, in the information collection table, a default collection item carrying default collection content, and an edit function item corresponding to the default collection item;
When a trigger operation for the editing function item is received, an editing interface for editing the default collection item and corresponding default collection content is presented;
the generating module 5554 is further configured to generate, based on the collection content, a target information collection table in combination with a default collection item edited based on the editing interface and a corresponding default collection content.
In some embodiments, the presentation module 5551 is further configured to receive an item creation instruction for the information collection table;
and responding to the project creation instruction, and presenting an operation identification area corresponding to the information collection table.
In some embodiments, the presenting module 5551 is further configured to present an item creation function in the information collection table, and receive an item creation instruction for the information collection table in response to a trigger operation for the item creation function;
or receiving a voice item creation instruction which is triggered based on a voice form and aims at the information collection table.
In some embodiments, the apparatus further comprises:
the starting module is used for receiving a starting instruction of an operation creation function aiming at the information collection table;
In response to the open instruction, opening an operation creation function that creates a collection item in the information collection table by executing a target operation;
the presenting module 5551 is further configured to present an operation identification area corresponding to the information collection table when the operation creation function is turned on.
In some embodiments, the opening module is further configured to receive an opening instruction of an operation creation function for the information collection table in response to a trigger operation of the operation creation function switch for presentation;
or receiving a voice starting instruction which is triggered based on a voice form and aims at the operation creation function of the information collection table.
In some embodiments, the presenting module 5551 is further configured to present, in the operation identification area, guiding information corresponding to the target operation;
wherein the guidance information is used for guiding the creation of the target collection item in the information collection table by executing the target operation.
In some embodiments, the acquisition module 5552 is further configured to present, in the operation recognition area, a first image recognition frame corresponding to the image in a process of performing user operation recognition on the image;
When the user operation identification is completed on the image, presenting a second image identification frame corresponding to the image in the operation identification area;
the positions of the first image recognition frame and the second image recognition frame correspond to a target area containing user operation in the image; the first image recognition frame and the second image recognition frame are different in display style.
In some embodiments, the presenting module 5551 is further configured to present creation prompt information in a process of creating the target collection item of the type associated with the target operation;
and the creation prompt information is used for prompting that the target collection item of the type associated with the target operation is being created.
In some embodiments, the presentation module 5551 is further configured to turn on a voice input function for the target collection item, and
presenting a voice input interface corresponding to the target collection item;
the generating module 5554 is further configured to receive collection content corresponding to the target collection item in voice form, which is input based on the voice input interface.
In some embodiments, the presenting module 5551 is further configured to present, in the voice input interface, voice collection prompt information corresponding to the target collection item;
The voice collection prompt message is used for prompting that collection contents aiming at the target collection project are being collected;
and presenting the process of changing the style of the voice acquisition prompt information along with the input of the voice form collection content.
In some embodiments, the apparatus further comprises:
the association module is used for presenting an operation setting area corresponding to the information collection table and at least two types of collection items including the target collection item;
collecting a first image containing a first user operation, and presenting the first image through the operation setting area;
and identifying the first user operation, and after the first user operation is successfully identified, responding to the selection operation for the target collection item, and associating the first user operation with the target collection item as the target operation.
In some embodiments, the association module is further configured to present a drop-down function item corresponding to the collection item type;
in response to a trigger operation for the drop-down function item, at least two types of collection items including the target collection item are presented.
In some embodiments, the association module is further configured to present, in an operation setting area, first operation prompt information corresponding to the first user operation;
The first operation prompt information is used for prompting that the first user operation is associated with the target collection item through executing the first user operation.
In some embodiments, the association module is further configured to present a second operation prompt when the first user operation is associated with a collection item of a corresponding type;
the second operation prompt information is used for prompting that the first user operation is associated with a collection item of a corresponding type and guiding to re-execute other user operations different from the first user operation.
In some embodiments, the acquisition module 5552 is further configured to perform skin detection processing on the image, and determine a partial image that is included in the image and corresponds to a user operation;
and inputting the partial images into a machine learning model for operation prediction to obtain a prediction result of whether the user operation is the target operation.
In some embodiments, the generating module 5554 is further configured to receive collection content corresponding to the target collection item in the form of input voice;
the generating module 5554 is further configured to perform silence detection on the collected content in a voice form, obtain a silence portion in the collected content, and remove the silence portion in the collected content to obtain a target collected content;
Segmenting the target collection content to obtain a plurality of sound frames contained in the target collection content, and extracting the characteristics of each sound frame to obtain corresponding sound characteristics;
respectively inputting sound characteristics corresponding to each sound frame into a machine learning model for voice recognition to obtain text content corresponding to each sound frame;
and generating a target information collection table based on the text content corresponding to each sound frame.
In the embodiment of the present application, an operation identification area corresponding to the initial information collection table is presented, and an image containing a user operation is presented through the operation identification area; when the user operation presented in the operation identification area is a target operation, a target collection item of the type associated with the target operation is created in the information collection table in response to the target operation, so that when the input collection content corresponding to the target collection item is received, a target information collection table is generated based on the collection content. In this way, the user can create collection items in the information collection table by executing target operations associated with the types of the target collection items, which enriches the ways of creating information collection tables, is simple to operate, improves the creation efficiency of information collection tables, and saves hardware processing resources.
The embodiment of the application also provides electronic equipment, which comprises:
a memory for storing executable instructions;
and the processor is used for realizing the processing method of the information collection table provided by the embodiment of the application when executing the executable instructions stored in the memory.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the processing method of the information collection table provided in the embodiment of the application.
The embodiment of the application also provides a computer readable storage medium which stores executable instructions, wherein when the executable instructions are executed by a processor, the processing method of the information collection table provided by the embodiment of the application is realized.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; it may also be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, HyperText Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (20)

1. A method of processing an information collection table, the method comprising:
presenting an operation setting area corresponding to the information collection table and at least two types of collection items including a target collection item;
collecting a first image containing a first user operation, and presenting the first image through the operation setting area;
identifying the first user operation, and after the first user operation is successfully identified, responding to the selection operation for the target collection item, and associating the first user operation with the target collection item as a target operation;
presenting an initial information collection table and an operation identification area corresponding to the information collection table;
collecting an image containing user operation, and presenting the image through the operation identification area;
when the user operation presented by the operation identification area is a target operation, responding to the target operation, and creating the target collection item of the type associated with the target operation in the information collection table;
Receiving input collection content corresponding to the target collection item, and generating a target information collection table based on the collection content;
the target information collection table is used for enabling at least two users with operation authorities to edit corresponding information based on the collected content.
2. The method of claim 1, wherein prior to the presenting the initial information collection table, the method further comprises:
receiving a voice creation instruction which is triggered in a voice mode and aims at an information collection table;
and responding to the voice creation instruction, and creating the initial information collection table.
3. The method of claim 1, wherein prior to the presenting the initial information collection table, the method further comprises:
presenting a collection table creation interface, and presenting a blank information collection table creation template in the collection table creation interface;
in response to a template selection operation for the creation template, a blank information collection table is created as the initial information collection table.
4. The method of claim 1, wherein after the presenting the initial information collection table, the method further comprises:
Presenting default collection items carrying default collection contents and editing function items corresponding to the default collection items in the information collection table;
when a trigger operation for the editing function item is received, an editing interface for editing the default collection item and corresponding default collection content is presented;
the generating a target information collection table based on the collection content includes:
and generating a target information collection table based on the collection content and combined with the default collection items edited based on the editing interface and the corresponding default collection content.
5. The method of claim 1, wherein presenting the operation identification area corresponding to the information collection table comprises:
receiving an item creation instruction for the information collection table;
and responding to the project creation instruction, and presenting an operation identification area corresponding to the information collection table.
6. The method of claim 1, wherein the method further comprises:
receiving an opening instruction of an operation creation function aiming at the information collection table;
in response to the open instruction, opening an operation creation function that creates a collection item in the information collection table by executing a target operation;
The presenting the operation identification area corresponding to the information collection table comprises the following steps:
and when the operation creation function is started, presenting an operation identification area corresponding to the information collection table.
7. The method of claim 6, wherein the receiving an open instruction for an operation creation function of the information collection table comprises:
receiving an opening instruction of an operation creation function of the information collection table in response to a triggering operation of an operation creation function switch for presentation;
or receiving a voice starting instruction which is triggered based on a voice form and aims at the operation creation function of the information collection table.
8. The method of claim 1, wherein after presenting the operation identification area corresponding to the information collection table, the method further comprises:
presenting guide information corresponding to the target operation in the operation identification area;
wherein the guidance information is used for guiding the creation of the target collection item in the information collection table by executing the target operation.
9. The method of claim 1, wherein after the image is presented by the operation recognition area, the method further comprises:
In the process of carrying out user operation identification on the image, presenting a first image identification frame corresponding to the image in the operation identification area;
when the user operation identification is completed on the image, presenting a second image identification frame corresponding to the image in the operation identification area;
the positions of the first image recognition frame and the second image recognition frame correspond to a target area containing user operation in the image; the first image recognition frame and the second image recognition frame are different in display style.
10. The method of claim 1, wherein the method further comprises:
presenting creation prompt information in the process of creating the target collection item of the type associated with the target operation;
and the creation prompt information is used for prompting that the target collection item of the type associated with the target operation is being created.
11. The method of claim 1, wherein after the creating the target collection item of the type associated with the target operation in the information collection table, the method further comprises:
starting a voice input function aiming at the target collection item, and
Presenting a voice input interface corresponding to the target collection item;
the collecting content corresponding to the target collecting item received and input comprises the following steps:
and receiving the collection content corresponding to the target collection item in the voice form, which is input based on the voice input interface.
12. The method of claim 11, wherein the method further comprises:
presenting voice acquisition prompt information corresponding to the target collection item in the voice input interface;
the voice collection prompt message is used for prompting that collection contents aiming at the target collection project are being collected;
and presenting the process of changing the style of the voice acquisition prompt information along with the input of the voice form collection content.
13. The method of claim 1, wherein the method further comprises:
presenting, in the operation setting area, first operation prompt information corresponding to the first user operation;
wherein the first operation prompt information prompts that performing the first user operation will associate the first user operation with the target collection item.
14. The method of claim 1, wherein the method further comprises:
When the first user operation is associated with a collection item of a corresponding type, second operation prompt information is presented;
the second operation prompt information is used for prompting that the first user operation is associated with a collection item of a corresponding type and guiding to re-execute other user operations different from the first user operation.
15. The method of claim 1, wherein the method further comprises:
performing skin detection processing on the image to determine a partial image, contained in the image, that corresponds to the user operation;
and inputting the partial image into a machine learning model for operation prediction, to obtain a prediction result indicating whether the user operation is the target operation.
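For illustration only (not the patented implementation), the two steps of claim 15 could look like the following Python sketch; the YCrCb skin thresholds, the 224x224 input size, and the Keras-style classifier interface are all assumptions:

    import cv2
    import numpy as np

    def extract_operation_region(image_bgr):
        """Skin detection step: return the crop of the largest skin-colored region."""
        ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
        # A commonly used skin range in YCrCb space (an assumption, not from the patent).
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return image_bgr[y:y + h, x:x + w]

    def predict_is_target_operation(model, image_bgr, threshold=0.5):
        """Prediction step: classify the cropped partial image with a hypothetical model."""
        crop = extract_operation_region(image_bgr)
        if crop is None:
            return False
        crop = cv2.resize(crop, (224, 224)).astype(np.float32) / 255.0
        prob = float(model.predict(crop[np.newaxis])[0])  # hypothetical Keras-style model
        return prob > threshold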
16. The method of claim 1, wherein the receiving of the input collection content corresponding to the target collection item comprises:
receiving collection content in voice form corresponding to the target collection item;
and the generating of the target information collection table based on the collection content comprises:
performing silence detection on the collection content in voice form to obtain its silent portions, and removing the silent portions to obtain target collection content;
segmenting the target collection content into a plurality of sound frames, and performing feature extraction on each sound frame to obtain corresponding sound features;
inputting the sound features corresponding to each sound frame into a machine learning model for speech recognition, to obtain text content corresponding to each sound frame;
and generating the target information collection table based on the text content corresponding to each sound frame.
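As a rough illustration of the claim 16 pipeline (not the patented implementation), the sketch below uses a simple energy threshold for silence detection, overlapping windowed frames with log-magnitude spectra as sound features, and a hypothetical asr_model whose recognize method maps one frame's features to text; all thresholds, frame sizes, and the model interface are assumptions:

    import numpy as np

    def remove_silence(samples, frame_len=400, energy_thresh=1e-4):
        """Silence detection: drop frames whose mean energy is below the threshold."""
        frames = [samples[i:i + frame_len]
                  for i in range(0, len(samples) - frame_len + 1, frame_len)]
        voiced = [f for f in frames if np.mean(f.astype(np.float64) ** 2) > energy_thresh]
        return np.concatenate(voiced) if voiced else np.array([])

    def frame_and_features(samples, frame_len=400, hop=160):
        """Segmentation and feature extraction: windowed frames -> log-magnitude spectra."""
        feats = []
        for i in range(0, len(samples) - frame_len + 1, hop):
            frame = samples[i:i + frame_len] * np.hanning(frame_len)
            feats.append(np.log(np.abs(np.fft.rfft(frame)) + 1e-8))
        return np.array(feats)

    def transcribe(asr_model, samples):
        """Full pipeline: silence removal, framing, per-frame recognition, then join."""
        voiced = remove_silence(samples)
        features = frame_and_features(voiced)
        return "".join(asr_model.recognize(f) for f in features)  # hypothetical interface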
17. An apparatus for processing an information collection table, the apparatus comprising:
an association module, configured to present an operation setting area corresponding to the information collection table together with at least two types of collection items including a target collection item; to acquire a first image containing a first user operation and present the first image through the operation setting area; and to recognize the first user operation and, after the first user operation is successfully recognized, associate the first user operation with the target collection item as the target operation in response to a selection operation on the target collection item;
a presentation module, configured to present an initial information collection table and an operation identification area corresponding to the information collection table;
an acquisition module, configured to acquire an image containing a user operation and present the image through the operation identification area;
a creation module, configured to, when the user operation presented in the operation identification area is the target operation, create a target collection item of the type associated with the target operation in the information collection table in response to the target operation;
and a generation module, configured to receive input collection content corresponding to the target collection item and generate a target information collection table based on the collection content;
wherein the target information collection table enables at least two users with operation authority to edit corresponding information based on the collection content.
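Purely as a structural illustration (hypothetical, not Tencent's code), the modules of claim 17 could be organized as below; the presentation and acquisition modules are UI-facing and omitted, and all names and the dict-based table are assumptions:

    class InformationCollectionTableApparatus:
        """One method per remaining module named in claim 17."""

        def __init__(self):
            self.bindings = {}   # user operation -> collection item type (association module)
            self.items = []      # collection items created so far (creation module)

        def associate(self, user_operation, item_type):
            """Association module: bind a recognized first user operation to an item type."""
            self.bindings[user_operation] = item_type

        def create_item(self, user_operation):
            """Creation module: create the item whose type is bound to the target operation."""
            self.items.append({"type": self.bindings[user_operation], "content": None})

        def generate(self, contents):
            """Generation module: fill in received collection content, return the target table."""
            for item, content in zip(self.items, contents):
                item["content"] = content
            return {"items": self.items}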
18. An electronic device, comprising:
a memory for storing executable instructions;
and a processor configured to implement the method of processing an information collection table according to any one of claims 1 to 16 when executing the executable instructions stored in the memory.
19. A computer-readable storage medium storing executable instructions that, when executed, implement the method of processing an information collection table according to any one of claims 1 to 16.
20. A computer program product comprising executable instructions which, when executed by a processor, implement the method of processing an information collection table according to any one of claims 1 to 16.
CN202110790684.XA 2021-07-13 2021-07-13 Information collection table processing method and device, electronic equipment and storage medium Active CN113485619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110790684.XA CN113485619B (en) 2021-07-13 2021-07-13 Information collection table processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113485619A CN113485619A (en) 2021-10-08
CN113485619B true CN113485619B (en) 2024-03-19

Family

ID=77938379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110790684.XA Active CN113485619B (en) 2021-07-13 2021-07-13 Information collection table processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113485619B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012234106A (en) * 2011-05-09 2012-11-29 Manabing Kk Automatic question creating device and creating method
CN103685908A (en) * 2012-09-12 2014-03-26 联发科技股份有限公司 Method for controlling execution of camera related functions
CN108563384A (en) * 2018-04-17 2018-09-21 泰康保险集团股份有限公司 A kind of exchange method and relevant device based on questionnaire
CN108805035A (en) * 2018-05-22 2018-11-13 深圳市鹰硕技术有限公司 Interactive teaching and learning method based on gesture identification and device
CN109300065A (en) * 2018-08-15 2019-02-01 北京博赛在线网络科技有限公司 A kind of online exercises generation method and device
CN109522799A (en) * 2018-10-16 2019-03-26 深圳壹账通智能科技有限公司 Information cuing method, device, computer equipment and storage medium
US10474745B1 (en) * 2016-04-27 2019-11-12 Google Llc Systems and methods for a knowledge-based form creation platform
CN110660275A (en) * 2019-09-18 2020-01-07 武汉天喻教育科技有限公司 Teacher-student classroom instant interaction system and method based on video analysis
CN110826302A (en) * 2019-11-07 2020-02-21 网之易信息技术(北京)有限公司 Questionnaire creating method, device, medium and electronic equipment
CN110941992A (en) * 2019-10-29 2020-03-31 平安科技(深圳)有限公司 Smile expression detection method and device, computer equipment and storage medium
CN112286411A (en) * 2020-09-30 2021-01-29 北京大米科技有限公司 Display mode control method and device, storage medium and electronic equipment
CN112384926A (en) * 2019-04-30 2021-02-19 微软技术许可有限责任公司 Document autocompletion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110145010A1 (en) * 2009-12-13 2011-06-16 Soft Computer Consultants, Inc. Dynamic user-definable template for group test

Also Published As

Publication number Publication date
CN113485619A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
US20230316643A1 (en) Virtual role-based multimodal interaction method, apparatus and system, storage medium, and terminal
US20220394344A1 (en) Method, apparatus, electronic device, and storage medium for processing live streaming information
CN110598576B (en) Sign language interaction method, device and computer medium
US9754585B2 (en) Crowdsourced, grounded language for intent modeling in conversational interfaces
CN108942919B (en) Interaction method and system based on virtual human
CN114401438B (en) Video generation method and device for virtual digital person, storage medium and terminal
CN109086860B (en) Interaction method and system based on virtual human
TW201937344A (en) Smart robot and man-machine interaction method
CN109086276B (en) Data translation method, device, terminal and storage medium
CN108470188B (en) Interaction method based on image analysis and electronic equipment
US11455510B2 (en) Virtual-life-based human-machine interaction methods, apparatuses, and electronic devices
WO2021218432A1 (en) Method and apparatus for interpreting picture book, electronic device and smart robot
CN110825164A (en) Interaction method and system based on wearable intelligent equipment special for children
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
CN111723653A (en) Drawing book reading method and device based on artificial intelligence
CN114708443A (en) Screenshot processing method and device, electronic equipment and computer readable medium
CN111488147A (en) Intelligent layout method and device
CN110992958B (en) Content recording method, content recording apparatus, electronic device, and storage medium
CN113485619B (en) Information collection table processing method and device, electronic equipment and storage medium
CN108628454B (en) Visual interaction method and system based on virtual human
CN113963306B (en) Courseware title making method and device based on artificial intelligence
CN114529635A (en) Image generation method, device, storage medium and equipment
CN114007145A (en) Subtitle display method and display equipment
CN111428569A (en) Visual identification method and device for picture book or teaching material based on artificial intelligence
CN117149965A (en) Dialogue processing method, dialogue processing device, computer equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053617

Country of ref document: HK

GR01 Patent grant