CN110430466B - Audio and video acquisition method and device based on AI identification

Info

Publication number
CN110430466B
Authority
CN
China
Prior art keywords
audio
user
video
information
video information
Prior art date
Legal status
Active
Application number
CN201910744211.9A
Other languages
Chinese (zh)
Other versions
CN110430466A (en)
Inventor
陈剑峰
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201910744211.9A
Publication of CN110430466A
Application granted
Publication of CN110430466B

Classifications

    • H04N21/4394 - Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/44008 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/441 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4532 - Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/4545 - Input to filtering algorithms, e.g. filtering a region of the image
    • H04N21/8352 - Generation of protective data involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention discloses an audio and video acquisition method and device based on AI identification. The method comprises the following steps: obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; obtaining an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user. By adding audio/video tag information, the audio and video can be filtered and classified quickly and efficiently, and audio/video segments of specific persons can be sent automatically, which is more user-friendly and achieves the technical effect of meeting users' varied audio and video needs.

Description

Audio and video acquisition method and device based on AI identification
Technical Field
The application relates to the technical field of audio and video processing, in particular to an audio and video acquisition method and device based on AI identification.
Background
Cameras have become increasingly common in everyday life, particularly in the indoor and outdoor environments of enclosed areas such as schools, hospitals, residential communities and nursing homes. However, the audio and video systems currently in use mostly provide real-time monitoring and capture. On the one hand, they cannot effectively classify audio and video information; for example, to find the audio and video fragments of a specific person, such as when parents want to see the videos of their child in a classroom, a great deal of time must be spent reviewing footage and the search is done manually. On the other hand, the audio and video of a monitored person cannot be quickly obtained and browsed; for example, an adult child who wants to see an elderly parent in a nursing home receiving a gift cannot do so promptly.
However, in implementing the technical solution of the embodiments of the present application, the inventor of the present application found that the above prior art has at least the following technical problems:
in the prior art, audio and video cannot be classified and processed, and the audio and video of a specific user cannot be obtained.
Disclosure of Invention
The embodiment of the application provides an audio and video acquisition method and device based on AI identification, so as to solve the technical problems in the prior art that audio and video cannot be classified and processed and that the audio and video of a specific user cannot be obtained. By adding audio/video tag information, the audio and video can be filtered and classified quickly and efficiently, and audio/video segments of specific persons can be sent automatically, which is more user-friendly and achieves the technical effect of meeting users' varied audio and video needs.
In order to solve the above problem, in a first aspect, an embodiment of the present application provides an audio and video acquisition method based on AI identification, where the method includes: obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; acquiring an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
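As a concrete, non-limiting illustration of the four steps above, the Python sketch below strings them together. Every class, function and data store used here (for example AVStore.query) is a hypothetical placeholder introduced for illustration only and is not part of the claimed device.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class ProcessTag:
    tag_id: str
    trigger_mode: str      # e.g. "preset_condition" / "on_site" / "monitoring_staff"
    start_time: float
    duration: float


@dataclass
class AVStore:
    # Clip identifiers keyed by (person_id, tag_id); a purely illustrative in-memory store.
    clips: Dict[Tuple[str, str], List[str]] = field(default_factory=dict)

    def query(self, person_id: str, tag: ProcessTag) -> List[str]:
        return self.clips.get((person_id, tag.tag_id), [])


def acquire_and_send(first_user: str, second_user: str,
                     tags: List[ProcessTag], store: AVStore) -> List[str]:
    """The claimed flow in miniature: process tags -> matching A/V info -> fragment -> send."""
    # The monitoring/monitored relationship of first_user and second_user is assumed verified.
    first_av_info: List[str] = []
    for tag in tags:                                  # tags obtained for this user pair
        first_av_info.extend(store.query(second_user, tag))
    fragment = first_av_info                          # placeholder for cutting/merging clips
    print(f"sending {len(fragment)} clip(s) to {first_user}")
    return fragment
```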
Preferably, the triggering mode of the audio/video acquisition process tag includes one or more of a preset condition triggering mode, a field personnel triggering mode and a monitoring personnel triggering mode.
Preferably, after obtaining the feature information and the identity information of the first user and the second user, the method includes: judging whether the first user and the second user accord with a monitoring and monitored relation or not according to the characteristic information and the identity information of the first user and the second user; if the first user and the second user accord with the monitoring and monitored relation, storing the relation between the first user and the second user into a database to obtain an audio/video acquisition process label; and if the first user and the second user do not accord with the monitoring and monitored relation, storing the relation between the first user and the second user into a blacklist, and forbidding obtaining of an audio/video acquisition process label.
Preferably, the first user includes: a first right holder, the first right holder being a person having guardianship and/or monitoring rights over the second user; and a second right holder, the second right holder being a person authorized by the first right holder to monitor the second user.
Preferably, the acquiring first audio/video information according to the audio/video acquisition process tag includes: acquiring a trigger mode of the audio and video acquisition process tag; triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag to obtain process tag recording information; and acquiring first audio and video information according to the process tag recording information.
Preferably, the acquiring first audio/video information according to the process tag recording information includes: the process tag recording information comprises a plurality of secondary tags, wherein each secondary tag has an audio/video set with corresponding attributes; and filtering the audio and video according to the selection of the secondary label to obtain the first audio and video information.
Preferably, after acquiring the first audio/video information according to the process tag recording information, the method includes: acquiring second audio and video information according to the process tag recording information; obtaining subscription information of the first user; judging whether the second audio and video information is matched with the subscription information; if the second audio and video information is matched with the subscription information, determining that the second audio and video information is the first audio and video information; and if the second audio and video information is not matched with the subscription information, storing the second audio and video information.
In a second aspect, an embodiment of the present application further provides an AI identification-based audio/video acquisition device, where the device includes:
a first obtaining unit, where the first obtaining unit is used for obtaining feature information and identity information of a first user and a second user, and the first user and the second user are in a monitoring and monitored relationship;
the second obtaining unit is used for obtaining an audio and video acquisition process label according to the first user and the second user;
the first acquisition unit is used for acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user;
and the third obtaining unit is used for processing the first audio and video information to obtain a first audio and video fragment and sending the first audio and video fragment to the first user.
Preferably, the apparatus further comprises: the triggering mode of the audio and video acquisition process tag comprises one or more of a preset condition triggering mode, a field personnel triggering mode and a monitoring personnel triggering mode.
Preferably, the apparatus further comprises:
the first judging unit is used for judging whether the first user and the second user accord with the monitoring and monitored relation or not according to the characteristic information and the identity information of the first user and the second user;
the first execution unit is used for storing the relationship between the first user and the second user into a database to obtain an audio/video acquisition process label if the first user and the second user accord with the monitoring and monitored relationship;
and the second execution unit is used for storing the relationship between the first user and the second user into a blacklist and forbidding obtaining of an audio/video acquisition process label if the first user and the second user do not accord with the monitoring and monitored relationship.
Preferably, the first user includes:
a first right holder, the first right holder being a person having guardianship and/or monitoring rights over the second user;
a second right holder, the second right holder being a person authorized by the first right holder to monitor the second user.
Preferably, the first collecting unit includes:
the fourth obtaining unit is used for obtaining a trigger mode of the audio and video acquisition process label;
the fifth obtaining unit is used for triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag so as to obtain process tag recording information;
and the second acquisition unit is used for acquiring first audio and video information according to the process tag recording information.
Preferably, the second acquisition unit includes:
a first containing unit, wherein the process tag recording information contains a plurality of secondary tags, and each secondary tag has an audio/video set with corresponding attributes;
and the sixth obtaining unit is used for filtering the audio and video according to the selection of the secondary label to obtain the first audio and video information.
Preferably, the apparatus comprises:
a seventh obtaining unit, configured to obtain second audio/video information according to the process tag recording information;
an eighth obtaining unit, configured to obtain subscription information of the first user;
the second judgment unit is used for judging whether the second audio and video information is matched with the subscription information;
the first determining unit is used for determining that the second audio and video information is the first audio and video information if the second audio and video information is matched with the subscription information;
and the first storage unit is used for storing the second audio and video information if the second audio and video information is not matched with the subscription information.
In a third aspect, an embodiment of the present application further provides an AI identification-based audio/video acquisition device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor executes the program to implement the following steps:
obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; acquiring an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the following steps:
obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; acquiring an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
the embodiment of the application provides an audio and video acquisition method and device based on AI identification. The method comprises the following steps: obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; obtaining an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user. This solves the technical problems in the prior art that audio and video cannot be classified and processed and that the audio and video of a specific user cannot be obtained. By adding audio/video tag information, the audio and video can be filtered and classified quickly and efficiently, and audio/video segments of specific persons can be sent automatically, which is more user-friendly and achieves the technical effect of meeting users' varied audio and video needs.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application more clearly understood and implementable according to the content of the description, and to make the above and other objects, features and advantages of the present application more comprehensible, a detailed description of the present application is given below.
Drawings
Fig. 1 is a schematic flow chart of an AI identification-based audio and video acquisition method in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an audio/video acquisition device based on AI identification in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another audio and video acquisition device based on AI identification in the embodiment of the present invention;
fig. 4 is a system block diagram of an AI identification based audio and video acquisition method in an embodiment of the present invention.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a first acquiring unit 13, a third obtaining unit 14, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 306.
Detailed Description
The embodiment of the application provides an audio and video acquisition method and device based on AI identification, and solves the technical problems that audio and video cannot be classified and processed and the audio and video of a specific user cannot be acquired in the prior art.
In order to solve the above technical problems, the general idea of the technical solution provided by the application is as follows: obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; obtaining an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user. By adding audio/video tag information, the audio and video can be filtered and classified quickly and efficiently, and audio/video segments of specific persons can be sent automatically, which is more user-friendly and achieves the technical effect of meeting users' varied audio and video needs.
It should be understood that the term Artificial Intelligence (AI) was first proposed at the Dartmouth Conference in 1956; since then researchers have developed many theories and methods, and the concept of artificial intelligence has continued to expand. Artificial intelligence is a new technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. It is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing and expert systems, among others. Since the birth of artificial intelligence, its theories and technologies have matured steadily and its fields of application have kept expanding, and it can be expected that the science and technology products it brings in the future will be "containers" of human intelligence. Artificial intelligence is a simulation of the information process of human consciousness and thinking. It is not human intelligence, but it can think like a human and may even exceed human intelligence.
The technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application, and are not limitations of the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
Example one
Fig. 1 is a schematic flow chart of an AI identification-based audio and video acquisition method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step 110: obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship;
further, after obtaining the feature information and the identity information of the first user and the second user, the method includes: judging whether the first user and the second user accord with a monitoring and monitored relation or not according to the characteristic information and the identity information of the first user and the second user; if the first user and the second user accord with the monitoring and monitored relation, storing the relation between the first user and the second user into a database to obtain an audio/video acquisition process label; and if the first user and the second user do not accord with the monitoring and monitored relation, storing the relation between the first user and the second user into a blacklist, and forbidding obtaining of an audio/video acquisition process label.
Specifically, the feature information and identity information of the first user and the second user are first obtained, and a pairing relationship between the first user and the second user is established using AI image recognition. In plain terms, the monitoring and monitored relationship between the first user and the second user is determined by establishing the pairing through one or more of face recognition, fingerprint recognition, voice recognition and iris recognition. The monitoring person and the monitored person are not limited to a one-to-one relationship and may be M-to-N, for example to match multiple parents with multiple children. For example, if identity is established by face recognition, the facial features of the first user and the second user are both collected and entered into the system database; if identity is established by iris recognition, the iris features of the first user's and second user's eyes are both collected and entered into the system database. After the corresponding feature information and identity information of the first user and the second user are obtained, it is judged whether the first user and the second user conform to the monitoring and monitored relationship. If they conform, the relationship between the first user and the second user is stored in a database so that an audio/video acquisition process tag can be obtained; if they do not conform, the relationship between the first user and the second user is stored in a blacklist, and obtaining an audio/video acquisition process tag is forbidden. For example, suppose parent A's child is student B and the school stipulates that each parent may only view and browse the audio/video clips of his or her own child. Parent A and student B conform to the monitoring and monitored relationship, so that relationship is established and recorded in the database; parent A and student C do not conform, so the relationship between parent A and student C is recorded in the blacklist and parent A is forbidden from obtaining audio/video information related to student C. If student B learns English online, for example on a computer, mobile phone or tablet, then because parent A and student B conform to the monitoring and monitored relationship, parent A can browse and view the audio/video clips of student B's online English learning. This further achieves the technical effect of identifying the persons involved.
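A minimal sketch of the relationship check just described, assuming the identity comparison has already been done; the dictionary-based database, the blacklist and names such as parent_A are illustrative assumptions only.

```python
# Verified monitoring/monitored pairs already known to the system (illustrative data).
known_pairs = {("parent_A", "student_B")}
database: set = set()      # pairs allowed to obtain audio/video acquisition process tags
blacklist: set = set()     # pairs forbidden from obtaining process tags


def register_pair(first_user: str, second_user: str) -> bool:
    """Store a conforming pair in the database, otherwise blacklist it."""
    pair = (first_user, second_user)
    if pair in known_pairs:              # conforms to the monitoring/monitored relation
        database.add(pair)
        return True                      # process tag acquisition is now allowed
    blacklist.add(pair)                  # e.g. parent_A paired with student_C
    return False


register_pair("parent_A", "student_B")   # -> True, stored in the database
register_pair("parent_A", "student_C")   # -> False, stored in the blacklist
```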
Further, the first user includes: a first right holder, the first right holder being a person having guardianship and/or monitoring rights over the second user; and a second right holder, the second right holder being a person authorized by the first right holder to monitor the second user.
Specifically, the first right holder and the second right holder included in the first user are, in short, the legally designated guardian of the second user and/or a person designated by that guardian. For example, if the guardian of student B is parent A, and parent A cannot be present because of a busy work schedule, parent A may designate student B's grandfather or teacher as student B's second right holder, that is, a person authorized by the first right holder to monitor the second user.
Step 120: acquiring an audio and video acquisition process tag according to the first user and the second user;
further, the triggering mode of the audio/video acquisition process tag comprises one or more of a preset condition triggering mode, a field personnel triggering mode and a monitoring personnel triggering mode.
Specifically, during audio and video acquisition, process tag information and person information, such as time, place, start time point and duration, are added to the related audio and video for the first user and second user whose monitoring and monitored relationship has been established, so that the audio and video can be filtered and classified quickly and efficiently later. The triggering mode of the audio/video acquisition process tag includes one or more of a preset condition triggering mode, a field personnel triggering mode and a monitoring personnel triggering mode. A preset condition trigger may be the child continuously jumping, shaking his or her head violently, laughing or crying beyond a certain duration, or the teacher applauding or a classmate hugging the child; parent A can select these preset actions in advance according to the child's own habits. Field personnel triggering mainly refers to an on-site teacher or instructor, who can record key events such as the child answering a question, giving a demonstration, taking part in a group discussion or presenting results via on-site electronic equipment according to how the lesson actually unfolds; the trigger can be entered directly on a processor or performed electronically, for example by placing RFID stickers on the children so that the teacher only needs to scan a sticker to start the recording of a process tag. Monitoring personnel triggering can be performed by dedicated staff in a monitoring center, who find unusual behaviour or outstanding moments of a child through manual observation; and where no trigger condition has occurred, the user can also supplement the trigger conditions according to actual needs. When an audio/video acquisition process tag is triggered, the related process information, such as the start time point, duration and number of occurrences, is recorded into the overall tag information of student B's lesson, thereby further achieving the technical effect of adding audio/video tag information and facilitating audio and video processing.
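The tag recording behaviour described above can be pictured with the sketch below; the trigger-mode strings, event names and the ProcessTagRecorder class are assumptions introduced for illustration, not terms defined by the patent.

```python
import time
from dataclasses import dataclass, field
from typing import List

TRIGGER_MODES = ("preset_condition", "on_site_personnel", "monitoring_personnel")


@dataclass
class TagRecord:
    person_id: str
    trigger_mode: str
    event: str                 # e.g. "continuous_jumping", "teacher_applause"
    start_time: float
    duration: float = 0.0
    occurrences: int = 1


@dataclass
class ProcessTagRecorder:
    records: List[TagRecord] = field(default_factory=list)

    def trigger(self, person_id: str, trigger_mode: str, event: str) -> TagRecord:
        """Start recording a process tag when one of the trigger modes fires."""
        if trigger_mode not in TRIGGER_MODES:
            raise ValueError(f"unknown trigger mode: {trigger_mode}")
        record = TagRecord(person_id, trigger_mode, event, start_time=time.time())
        self.records.append(record)
        return record

    def stop(self, record: TagRecord) -> None:
        """Close the tag and note how long the tagged behaviour lasted."""
        record.duration = time.time() - record.start_time


recorder = ProcessTagRecorder()
tag = recorder.trigger("student_B", "preset_condition", "continuous_jumping")
recorder.stop(tag)
```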
Step 130: acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user;
further, the acquiring first audio/video information according to the audio/video acquisition process tag includes: acquiring a trigger mode of the audio and video acquisition process tag; triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag to obtain process tag recording information; and acquiring first audio and video information according to the process tag recording information.
Specifically, after the triggering mode of the audio/video acquisition process tag is obtained, the process tag is triggered and recording starts, so that process tag recording information is obtained. If the triggering mode is a preset condition trigger, then when student B, the child of parent A, attends a thinking-skills class, events such as continuous jumping, violent head shaking, continuous stomping, laughing or crying beyond a certain duration, the teacher applauding or a classmate hugging student B can serve as audio/video process tags and be recorded. First audio and video information can then be acquired from the recorded process tag information, where the first audio and video information is the audio and video information that conforms to the process tag and relates to the second user, that is, the audio and video of a specific person carrying a specific process tag; the audio information can be obtained by voiceprint recognition of the second user. For example, suppose parent A sends student B to the thinking-skills class (group teaching, not one-to-one) but has to attend a teleconference and still wants to know how student B actually did. After the course ends, right at the public screen outside the classroom, parent A can, after identification by face recognition, fingerprint recognition, voice recognition, iris recognition or the like, see the audio and video of student B in the classroom together with the related process tags, such as the teacher applauding, student B hugging a classmate and student B shaking his head violently. The whole class lasts one hour, and although parent A cannot sit in front of a monitor screen the entire time, parent A can watch only the roughly ten minutes of audio and video involving student B and still learn about student B's performance in class and learning results. This further achieves the technical effects of filtering and classifying audio and video quickly and efficiently, being more user-friendly, and meeting users' varied audio and video needs.
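The mention of voiceprint recognition above can be illustrated with a small cosine-similarity check between a stored voiceprint embedding of the second user and the embedding of an audio clip; the embedding dimension, threshold and function names below are illustrative assumptions, not values given by the patent.

```python
import numpy as np


def is_same_speaker(stored_voiceprint: np.ndarray,
                    clip_embedding: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Cosine similarity between two voiceprint embeddings, compared to a threshold."""
    a = stored_voiceprint / np.linalg.norm(stored_voiceprint)
    b = clip_embedding / np.linalg.norm(clip_embedding)
    return float(np.dot(a, b)) >= threshold


# Hypothetical usage with random 128-dimensional embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                      # second user's enrolled voiceprint
candidate = enrolled + 0.1 * rng.normal(size=128)    # clip embedding close to the enrolled voice
print(is_same_speaker(enrolled, candidate))
```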
Further, the acquiring first audio/video information according to the process tag recording information includes: the process tag recording information comprises a plurality of secondary tags, wherein each secondary tag has an audio/video set with corresponding attributes; and filtering the audio and video according to the selection of the secondary label to obtain the first audio and video information.
Specifically, in combination with the process tag recording information described above, in actual operation the secondary tags of the process tag recording information may be behaviour attributes of student B, such as continuous jumping, violent head shaking, continuous stomping, laughing or crying beyond a certain duration, the teacher applauding or a classmate hugging; a video date range can also be selected, for example the last three months or the last month. Student B's continuous jumping may serve as one secondary tag, and the corresponding attribute of that secondary tag's audio/video set is continuous jumping. The system then filters the audio and video according to the secondary tags selected by the first user, thereby obtaining the first audio and video information. For example, if student B learns English online, for instance on a computer, mobile phone or tablet, parent A can set secondary tags for the audio/video information of student B's online English learning, such as whether student B pronounces the English words correctly, whether the lesson was completed, the score of the after-class homework and whether the after-class homework met the standard. After the system filters the audio/video information of student B's online English learning according to the secondary tags set by parent A, parent A can check on student B's learning at any time.
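Filtering by secondary tags, as described above, amounts to keeping the clips whose attributes intersect the tags chosen by the first user; in the sketch below the clip dictionaries and field names are assumptions, not the patent's data model.

```python
from typing import Dict, Iterable, List


def filter_by_secondary_tags(clips: Iterable[Dict],
                             selected_tags: Iterable[str]) -> List[Dict]:
    """Keep only the clips whose secondary-tag set intersects the selected tags."""
    wanted = set(selected_tags)
    return [clip for clip in clips if wanted & set(clip.get("secondary_tags", ()))]


# Hypothetical usage: parent A asks for "continuous_jumping" clips of student B.
clips = [
    {"clip_id": 1, "secondary_tags": ["continuous_jumping"], "date": "2019-07-20"},
    {"clip_id": 2, "secondary_tags": ["teacher_applause"], "date": "2019-07-21"},
]
first_av_info = filter_by_secondary_tags(clips, ["continuous_jumping"])
```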
Further, after acquiring the first audio/video information according to the process tag recording information, the method includes: acquiring second audio and video information according to the process tag recording information; obtaining subscription information of the first user; judging whether the second audio and video information is matched with the subscription information; if the second audio and video information is matched with the subscription information, determining that the second audio and video information is the first audio and video information; and if the second audio and video information is not matched with the subscription information, storing the second audio and video information.
Specifically, after the process tag recording information is obtained, second audio and video information is obtained, where the second audio and video information is the audio and video information that conforms to the process tag recording information and relates to the second user. If the first user has subscribed in advance to certain specific events, such as the child receiving praise or giving an individual on-site presentation, the subscription information indicates those events. If the second audio and video information matches the subscription information, that is, it is audio and video related to the child receiving praise or giving an individual on-site presentation, the second audio and video information is determined to be the first audio and video information; if the second audio and video information does not match the subscription information, that is, it is audio and video unrelated to those subscribed events, the second audio and video information is stored in the database for convenient later retrieval and use.
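The subscription check described above reduces to a membership test between the event carried by the second audio/video information and the events the first user subscribed to; the field names below are hypothetical.

```python
from typing import Optional, Set, Tuple


def match_subscription(second_av_info: dict,
                       subscribed_events: Set[str]) -> Tuple[Optional[dict], Optional[dict]]:
    """Return (first_av_info, stored_av_info) according to the subscription match."""
    if second_av_info.get("event") in subscribed_events:
        return second_av_info, None       # promoted to first audio/video information
    return None, second_av_info           # kept in the database for later retrieval


# Hypothetical usage: parent A subscribed to praise and individual presentations.
first_av, stored = match_subscription(
    {"event": "receiving_praise", "clip_id": 42},
    {"receiving_praise", "individual_presentation"},
)
```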
Step 140: and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
Specifically, as shown in fig. 4, after the first audio and video information is acquired, the process tag information added to the audio and video allows the first audio and video information to be filtered and classified quickly and efficiently, so that a first audio and video fragment is obtained. The first audio and video fragment is sent to the first user, and may also be sent to a third party for backup, such as the student's course teacher, the nurse of an elderly person living in a nursing home, or an attending physician; the content sent may further include data statistics, text-and-image material and the like. For example, suppose Mr. C's father, D, lives in a nursing home and a festival is approaching. Mr. C is away on a business trip, so he orders a gift online to be delivered to the nursing home, and he wants to see the moment his father receives the gift in real time. Since the pairing relationship between Mr. C and father D has already been recorded in the database, Mr. C can select the event of receiving the delivery, receive a system notification the moment the parcel arrives, open a live link, watch the actual scene of father D opening the parcel, and talk with father D through an audio and video call. The sending format of the first audio and video fragment can also be adjusted according to the terminal on which the monitoring person watches: a display screen outside a classroom can use a high-definition format of 720p or above, while a parent who chooses to watch on a mobile phone can be served a low-resolution stream to save data traffic. As another example, after student B finishes an online English lesson, the teacher assigns after-class homework on an English sentence pattern; student B imitates the sentence pattern following the prompts in the courseware, and once the word pronunciation meets the standard, the system automatically sends the recorded audio and video of student B's after-class homework to the teacher and to parent A. This further achieves the technical effect of automatically sending the audio and video clips of specific persons in a user-friendly way.
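The terminal-dependent delivery mentioned above (high definition for a public display, a low-resolution stream for a mobile phone) can be selected with a small lookup table; the terminal names, resolutions and bitrates below are assumptions for illustration, except that the description itself mentions 720p or above for the classroom display.

```python
# Hypothetical mapping from viewing terminal to an encoding profile for the fragment.
DELIVERY_PROFILES = {
    "classroom_display": {"resolution": "1280x720", "bitrate_kbps": 4000},  # 720p or above
    "mobile_phone": {"resolution": "640x360", "bitrate_kbps": 800},         # saves traffic
}


def choose_delivery_profile(terminal: str) -> dict:
    """Pick the format in which the first audio/video fragment is sent."""
    return DELIVERY_PROFILES.get(terminal, DELIVERY_PROFILES["mobile_phone"])


profile = choose_delivery_profile("mobile_phone")
print(profile["resolution"])   # -> "640x360"
```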
Example two
Based on the same inventive concept as the AI identification-based audio and video acquisition method in the foregoing embodiment, the present invention further provides an AI identification-based audio and video acquisition device, as shown in fig. 2, the device includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain feature information and identity information of a first user and a second user, and the first user and the second user are in a monitoring and monitored relationship;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain an audio/video acquisition process tag according to the first user and the second user;
the first acquisition unit 13 is used for acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user;
and the third obtaining unit 14 is configured to process the first audio/video information, obtain a first audio/video fragment, and send the first audio/video fragment to the first user.
Preferably, the apparatus further comprises: the triggering mode of the audio and video acquisition process tag comprises one or more of a preset condition triggering mode, a field personnel triggering mode and a monitoring personnel triggering mode.
Preferably, the apparatus further comprises:
the first judging unit is used for judging whether the first user and the second user accord with the monitoring and monitored relation or not according to the characteristic information and the identity information of the first user and the second user;
the first execution unit is used for storing the relationship between the first user and the second user into a database to obtain an audio/video acquisition process label if the first user and the second user accord with the monitoring and monitored relationship;
and the second execution unit is used for storing the relationship between the first user and the second user into a blacklist and forbidding obtaining of an audio/video acquisition process label if the first user and the second user do not accord with the monitoring and monitored relationship.
Preferably, the first user includes:
a first right holder, the first right holder being a person having guardianship and/or monitoring rights over the second user;
a second right holder, the second right holder being a person authorized by the first right holder to monitor the second user.
Preferably, the first collecting unit 13 includes:
the fourth obtaining unit is used for obtaining a trigger mode of the audio and video acquisition process label;
the fifth obtaining unit is used for triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag so as to obtain process tag recording information;
and the second acquisition unit is used for acquiring first audio and video information according to the process tag recording information.
Preferably, the second acquisition unit includes:
a first containing unit, wherein the process tag recording information contains a plurality of secondary tags, and each secondary tag has an audio/video set with corresponding attributes;
and the sixth obtaining unit is used for filtering the audio and video according to the selection of the secondary label to obtain the first audio and video information.
Preferably, the apparatus comprises:
a seventh obtaining unit, configured to obtain second audio/video information according to the process tag recording information;
an eighth obtaining unit, configured to obtain subscription information of the first user;
the second judgment unit is used for judging whether the second audio and video information is matched with the subscription information;
the first determining unit is used for determining that the second audio and video information is the first audio and video information if the second audio and video information is matched with the subscription information;
and the first storage unit is used for storing the second audio and video information if the second audio and video information is not matched with the subscription information.
Various changes and specific examples of the AI identification-based audio and video acquisition method in the first embodiment of fig. 1 are also applicable to the AI identification-based audio and video acquisition device in the present embodiment, and through the foregoing detailed description of the AI identification-based audio and video acquisition method, those skilled in the art can clearly know the implementation method of the AI identification-based audio and video acquisition device in the present embodiment, so for the brevity of the description, detailed description is not given here.
EXAMPLE III
Based on the same inventive concept as the AI-recognition-based audio and video acquisition method in the foregoing embodiments, the present invention further provides an AI-recognition-based audio and video acquisition apparatus, on which a computer program is stored, and the program, when executed by a processor, implements the steps of any one of the foregoing AI-recognition-based audio and video acquisition methods.
Fig. 3 shows a bus architecture (represented by a bus 300). The bus 300 may include any number of interconnected buses and bridges, and links together various circuits including one or more processors, represented by the processor 302, and memory, represented by the memory 304. The bus 300 may also link together various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 306 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
Example four
Based on the same inventive concept as the AI recognition based audio/video acquisition method in the foregoing embodiments, the present invention further provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of:
obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; acquiring an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
In a specific implementation, when the program is executed by a processor, any method step in the first embodiment may be further implemented.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
the embodiment of the application provides an audio and video acquisition method and device based on AI identification. The method comprises the following steps: obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship; obtaining an audio and video acquisition process tag according to the first user and the second user; acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user. This solves the technical problems in the prior art that audio and video cannot be classified and processed and that the audio and video of a specific user cannot be obtained. By adding audio/video tag information, the audio and video can be filtered and classified quickly and efficiently, and audio/video segments of specific persons can be sent automatically, which is more user-friendly and achieves the technical effect of meeting users' varied audio and video needs.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. An AI identification-based audio and video acquisition method is characterized by comprising the following steps:
obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship;
obtaining an audio and video acquisition process tag according to the first user and the second user, wherein the first user comprises:
a first right holder, the first right holder being a person having guardianship and/or monitoring rights over the second user;
a second right holder, the second right holder being a person authorized by the first right holder to monitor the second user;
acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is the audio and video information which accords with the process tag and is related to the second user, and acquiring the first audio and video information according to the audio and video acquisition process tag comprises the following steps:
acquiring a trigger mode of the audio and video acquisition process tag;
triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag to obtain process tag recording information;
acquiring first audio and video information according to the process tag recording information; the process tag recording information comprises a plurality of secondary tags, wherein each secondary tag has an audio/video set with corresponding attributes;
filtering the audio and video according to the selected secondary tag to obtain the first audio and video information;
acquiring second audio and video information according to the process tag recording information;
obtaining subscription information of the first user;
judging whether the second audio and video information is matched with the subscription information;
if the second audio and video information is matched with the subscription information, determining that the second audio and video information is the first audio and video information;
if the second audio and video information is not matched with the subscription information, storing the second audio and video information;
and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
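As a reading aid only, the flow of claim 1 might be pictured as the minimal sketch below. Every name and data shape here (ProcessTag, SecondaryTag, attribute sets, the subscription-matching rule) is an illustrative assumption made for readability, not the patented implementation: recorded material is filtered by the selected secondary tag, each candidate segment is matched against the first user's subscription, matches become the "first" audio/video information and non-matches are stored.

```python
# Hypothetical sketch of the claim-1 flow; all names and data shapes are
# illustrative assumptions, not the patented implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AVSegment:
    attributes: set          # assumed descriptors, e.g. {"crying", "classroom"}
    data: bytes = b""        # stand-in for encoded audio/video

@dataclass
class SecondaryTag:
    name: str
    segments: List[AVSegment] = field(default_factory=list)  # A/V set for this tag

@dataclass
class ProcessTag:
    trigger_mode: str                                    # how recording was triggered
    secondary_tags: List[SecondaryTag] = field(default_factory=list)

def acquire_first_av(tag: ProcessTag, selected: str, subscription: set,
                     archive: List[AVSegment]) -> List[AVSegment]:
    """Filter recorded segments by the selected secondary tag, then keep only
    those matching the first user's subscription; the rest is archived."""
    first_av = []
    for sec in tag.secondary_tags:
        if sec.name != selected:                 # filter by the chosen secondary tag
            continue
        for seg in sec.segments:                 # each candidate is "second" A/V info
            if seg.attributes & subscription:    # matches subscription -> "first" A/V
                first_av.append(seg)
            else:
                archive.append(seg)              # otherwise stored for later use
    return first_av
```

A caller would then cut the returned segments into clips and send them to the first user; that delivery step is outside this sketch.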
2. The method of claim 1, wherein the triggering mode of the audio and video acquisition process tag comprises one or more of a preset-condition triggering mode, an on-site-personnel triggering mode, and a monitoring-personnel triggering mode.
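The three trigger modes of claim 2 could, purely for illustration, be modeled as a simple dispatch. The mode strings and predicate parameters below are assumptions, not terms defined by the patent.

```python
# Illustrative-only dispatch over the three trigger modes named in claim 2.
def should_start_recording(mode: str, *, condition_met: bool = False,
                           on_site_request: bool = False,
                           monitor_request: bool = False) -> bool:
    if mode == "preset_condition":      # e.g. a detected sound or face (assumed example)
        return condition_met
    if mode == "on_site_personnel":     # someone at the scene requests recording (assumed)
        return on_site_request
    if mode == "monitoring_personnel":  # the remote monitoring party requests it (assumed)
        return monitor_request
    raise ValueError(f"unknown trigger mode: {mode}")
```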
3. The method of claim 1, wherein obtaining the feature information and the identity information of the first user and the second user comprises:
judging, according to the feature information and the identity information of the first user and the second user, whether the first user and the second user conform to the monitoring and monitored relationship;
if the first user and the second user conform to the monitoring and monitored relationship, storing the relationship between the first user and the second user in a database to obtain an audio and video acquisition process tag;
and if the first user and the second user do not conform to the monitoring and monitored relationship, storing the relationship between the first user and the second user in a blacklist, and prohibiting obtaining of an audio and video acquisition process tag.
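For claim 3, a hypothetical verification step might look like the following. The identity-record layout, the `is_monitoring_pair` rule, and the blacklist handling are all assumptions introduced here for readability.

```python
# Hypothetical check of the monitoring/monitored relationship from claim 3.
def register_relationship(first_user: dict, second_user: dict,
                          database: list, blacklist: list):
    """Return a new process tag if the pair forms a valid monitoring relationship,
    otherwise blacklist the pair and return None."""
    def is_monitoring_pair(a: dict, b: dict) -> bool:
        # Assumed rule: the first user's identity record must list the second
        # user as someone they are entitled to monitor.
        return b.get("id") in a.get("monitored_ids", [])

    pair = (first_user["id"], second_user["id"])
    if is_monitoring_pair(first_user, second_user):
        database.append(pair)
        return {"pair": pair, "secondary_tags": []}   # stand-in for a process tag
    blacklist.append(pair)
    return None   # no acquisition process tag may be obtained
```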
4. An audio and video acquisition device based on AI identification, characterized in that the device comprises:
a first obtaining unit, configured to obtain feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship;
a second obtaining unit, configured to obtain an audio/video acquisition process tag according to the first user and the second user, where the first user includes:
a first right holder, the first right holder being a party having rights to monitor the second user;
a second right holder, the second right holder being a party authorized by the first right holder to monitor the second user;
a first acquisition unit, configured to acquire first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user, and the first acquisition unit comprises:
a fourth obtaining unit, configured to obtain a trigger mode of the audio and video acquisition process tag;
a fifth obtaining unit, configured to trigger the process tag to start recording according to the trigger mode of the audio and video acquisition process tag, so as to obtain process tag recording information;
a second acquisition unit, configured to acquire first audio and video information according to the process tag recording information;
wherein the second acquisition unit comprises:
a first containing unit, configured such that the process tag recording information comprises a plurality of secondary tags, wherein each secondary tag has an audio/video set with corresponding attributes;
a sixth obtaining unit, configured to filter the audio and video according to the selected secondary tag to obtain the first audio and video information;
a seventh obtaining unit, configured to obtain second audio/video information according to the process tag recording information;
an eighth obtaining unit, configured to obtain subscription information of the first user;
a second judgment unit, configured to judge whether the second audio and video information is matched with the subscription information;
a first determining unit, configured to determine that the second audio and video information is the first audio and video information if the second audio and video information is matched with the subscription information;
a first storage unit, configured to store the second audio and video information if the second audio and video information is not matched with the subscription information; and a third obtaining unit, configured to process the first audio and video information to obtain a first audio and video fragment and send the first audio and video fragment to the first user.
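Claim 4 mirrors the method of claim 1 as a set of cooperating units. Purely as a reading aid, that decomposition could be pictured as the composition below; the unit names echo the claim, while the call signatures and wiring are assumptions.

```python
# Reading aid only: the device of claim 4 as a composition of the named units.
class AVAcquisitionDevice:
    def __init__(self, first_obtaining_unit, second_obtaining_unit,
                 first_acquisition_unit):
        self.first_obtaining_unit = first_obtaining_unit      # feature/identity info of the users
        self.second_obtaining_unit = second_obtaining_unit    # audio/video acquisition process tag
        self.first_acquisition_unit = first_acquisition_unit  # first A/V info per process tag

    def run(self, first_user, second_user):
        features = self.first_obtaining_unit(first_user, second_user)
        tag = self.second_obtaining_unit(first_user, second_user)
        return self.first_acquisition_unit(tag, features)
```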
5. An AI-recognition-based audio/video acquisition device, comprising a memory, a processor and a computer program stored on the memory and operable on the processor, characterized in that the processor implements the following steps when executing the program:
obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship;
obtaining an audio and video acquisition process tag according to the first user and the second user, wherein the first user comprises:
a first right holder, the first right holder being a party having rights to monitor the second user;
a second right holder, the second right holder being a party authorized by the first right holder to monitor the second user;
acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user, and wherein acquiring the first audio and video information according to the audio and video acquisition process tag comprises the following steps:
acquiring a trigger mode of the audio and video acquisition process tag;
triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag to obtain process tag recording information;
acquiring first audio and video information according to the process tag recording information;
the process tag recording information comprises a plurality of secondary tags, wherein each secondary tag has an audio/video set with corresponding attributes;
filtering the audio and video according to the selected secondary tag to obtain the first audio and video information;
acquiring second audio and video information according to the process tag recording information;
obtaining subscription information of the first user;
judging whether the second audio and video information is matched with the subscription information;
if the second audio and video information is matched with the subscription information, determining that the second audio and video information is the first audio and video information;
if the second audio and video information is not matched with the subscription information, storing the second audio and video information; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
6. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the steps of:
obtaining feature information and identity information of a first user and a second user, wherein the first user and the second user are in a monitoring and monitored relationship;
obtaining an audio and video acquisition process tag according to the first user and the second user, wherein the first user comprises:
a first right holder, the first right holder being a party having rights to monitor the second user;
a second right holder, the second right holder being a party authorized by the first right holder to monitor the second user;
acquiring first audio and video information according to the audio and video acquisition process tag, wherein the first audio and video information is audio and video information that conforms to the process tag and relates to the second user, and wherein acquiring the first audio and video information according to the audio and video acquisition process tag comprises the following steps:
acquiring a trigger mode of the audio and video acquisition process tag;
triggering the process tag to start recording according to the triggering mode of the audio and video acquisition process tag to obtain process tag recording information;
acquiring first audio and video information according to the process tag recording information; the process tag recording information comprises a plurality of secondary tags, wherein each secondary tag has an audio/video set with corresponding attributes;
filtering the audio and video according to the selected secondary tag to obtain the first audio and video information;
acquiring second audio and video information according to the process tag recording information;
obtaining subscription information of the first user;
judging whether the second audio and video information is matched with the subscription information;
if the second audio and video information is matched with the subscription information, determining that the second audio and video information is the first audio and video information;
if the second audio and video information is not matched with the subscription information, storing the second audio and video information; and processing the first audio and video information to obtain a first audio and video fragment, and sending the first audio and video fragment to the first user.
CN201910744211.9A 2019-08-13 2019-08-13 Audio and video acquisition method and device based on AI identification Active CN110430466B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910744211.9A CN110430466B (en) 2019-08-13 2019-08-13 Audio and video acquisition method and device based on AI identification

Publications (2)

Publication Number Publication Date
CN110430466A CN110430466A (en) 2019-11-08
CN110430466B (en) 2021-10-29

Family

ID=68414350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910744211.9A Active CN110430466B (en) 2019-08-13 2019-08-13 Audio and video acquisition method and device based on AI identification

Country Status (1)

Country Link
CN (1) CN110430466B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1578130A1 (en) * 2004-03-19 2005-09-21 Eximia S.r.l. Automated video editing system and method
CN101626494A (en) * 2009-08-14 2010-01-13 邸烁 Interactive method and device for infant playing field
CN101888539A (en) * 2010-06-25 2010-11-17 中兴通讯股份有限公司 Wireless video monitoring system and method
CN102957896A (en) * 2011-08-26 2013-03-06 中兴通讯股份有限公司 Audio and video monitoring method and audio and video monitoring device
CN104038717A (en) * 2014-06-26 2014-09-10 北京小鱼儿科技有限公司 Intelligent recording system
CN104092972A (en) * 2014-07-15 2014-10-08 北京小鱼儿科技有限公司 Communication terminal and tool installed on mobile terminal
CN105069342A (en) * 2015-08-23 2015-11-18 华南理工大学 Control method for educational resource database right based on face identification
CN105183729A (en) * 2014-05-30 2015-12-23 中国电信股份有限公司 Method and device for retrieving audio/video content
CN105323550A (en) * 2014-07-29 2016-02-10 霍尼韦尔国际公司 Video search and playback interface for vehicle monitor
CN106559654A (en) * 2016-11-18 2017-04-05 广州炫智电子科技有限公司 A kind of recognition of face monitoring collection system and its control method
CN108174154A (en) * 2017-12-29 2018-06-15 佛山市幻云科技有限公司 Long-distance video method, apparatus and server
CN108234944A (en) * 2017-12-29 2018-06-29 佛山市幻云科技有限公司 Children's monitoring method, device, server and system based on crying identification
CN108668097A (en) * 2018-05-22 2018-10-16 苏州市启献智能科技有限公司 It is a kind of to realize that the long-range child of home intercommunication checks system and method
CN108881813A (en) * 2017-07-20 2018-11-23 北京旷视科技有限公司 A kind of video data handling procedure and device, monitoring system
CN109889921A (en) * 2019-04-02 2019-06-14 北京蓦然认知科技有限公司 A kind of audio-video creation, playback method and device having interactive function
US10356192B1 (en) * 2007-10-22 2019-07-16 Alarm.Com Incorporated Providing electronic content based on sensor data
CN110022451A (en) * 2019-04-18 2019-07-16 环爱网络科技(上海)有限公司 For generating the method and system of sub-video and being stored with the medium of corresponding program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant