CN116527828A - Image processing method and device, electronic equipment and readable storage medium - Google Patents

Image processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN116527828A
CN116527828A (application CN202310363832.9A)
Authority
CN
China
Prior art keywords
interest
video stream
region
image
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310363832.9A
Other languages
Chinese (zh)
Inventor
赵玉瑶
黄远东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd filed Critical Beijing Eswin Computing Technology Co Ltd
Priority to CN202310363832.9A
Publication of CN116527828A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An embodiment of the invention provides an image processing method, an image processing device, an electronic device, and a readable storage medium, belonging to the technical field of multimedia. First, an original video stream is acquired; regions of interest are extracted from the image frames of the video stream according to a preset extraction rule, yielding a plurality of regions of interest; the contents of the regions of interest are input into a content processing model to obtain a director weight value for each region of interest; and during playback of the video stream, director processing is applied to the image content in the target region of interest whose director weight value meets a preset condition. When director processing is applied to the video stream, the full content of the video picture is not computed in its entirety: the picture is first divided into a plurality of different regions of interest, and operations such as analysis and computation are applied only to the key content within those regions. The computation for non-interest regions is thereby omitted, the amount of image data to be computed is reduced, and the director processing is more effective in application.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a readable storage medium.
Background
Director broadcasting, as a digital video media technology, has important applications in fields such as television program and variety show production. In the education field today, as smart-device infrastructure continues to improve, director technology is increasingly showing its technical advantages in smart-classroom application scenarios such as live course streaming and course recording.
In the prior art, a smart-classroom director system built on artificial-intelligence algorithms can use a deep-learning model to identify and capture the regions of the captured classroom picture where the scene changes, and give those regions a close-up in the director picture, so that the noteworthy content produced is recorded from a more focused viewing angle.
However, in existing schemes, the relevant algorithm must analyze and compute the entire monitored picture during content identification and capture, so the amount of analysis and computation is excessive and the director processing performs poorly in application.
Disclosure of Invention
An embodiment of the present invention provides an image processing method, an image processing device, an electronic device, and a readable storage medium, to solve the prior-art problem that, when content identification and capture are performed on images for director processing, the excessive amount of analysis and computation makes the director processing perform poorly in application.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an original video stream;
extracting regions of interest from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest;
inputting the contents of the plurality of regions of interest extracted from the image frames into a content processing model to obtain a director weight value for each region of interest; and
during playback of the video stream, performing director processing on the image content in a target region of interest whose director weight value meets a preset condition.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
a video stream acquisition module, configured to acquire an original video stream;
a region-of-interest acquisition module, configured to extract regions of interest from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest;
a director weight value determination module, configured to input the contents of the regions of interest extracted from the image frames into a content processing model to obtain a director weight value for each region of interest; and
a director processing execution module, configured to perform director processing, during playback of the video stream, on the image content in a target region of interest whose director weight value meets a preset condition.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method.
In a fourth aspect, an embodiment of the present invention provides a readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method.
In the embodiment of the invention, an original video stream is first acquired; regions of interest are extracted from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest; the contents of the plurality of regions of interest extracted from the image frames are input into a content processing model to obtain a director weight value for each region of interest; and during playback of the video stream, director processing is performed on the image content in the target region of interest whose director weight value meets a preset condition. When director processing is applied to the video stream, the full content of the video picture is not computed in its entirety: the picture is first divided into a plurality of different regions of interest, and operations such as analysis and computation are applied only to the key content within those regions, so that the computation for non-interest regions is omitted, the amount of image data to be computed is reduced, and the director processing is more effective in application.
The foregoing is merely an overview of the technical solution of the present invention. To enable a clearer understanding of the technical means of the invention so that it may be implemented in accordance with the description, and to make the above and other objects, features, and advantages of the invention more apparent, specific embodiments are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic step implementation flow chart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a teacher-side director video picture according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a student-side director video picture provided in an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the complete steps of an image processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of dynamic region-of-interest extraction provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a region frame size adjustment process according to an embodiment of the present invention;
FIG. 7 is a logic block diagram of an image processing method according to an embodiment of the present invention;
FIG. 8 is a diagram of an implementation effect of content emphasis display provided by an embodiment of the present invention;
FIG. 9 is a diagram of another implementation effect of content emphasis display provided by an embodiment of the invention;
FIG. 10 is a diagram of yet another implementation effect of content emphasis display provided by an embodiment of the invention;
fig. 11 is a schematic diagram showing functional components of an image processing apparatus according to an embodiment of the present invention;
FIG. 12 is a functional component relationship diagram of an electronic device according to an embodiment of the present invention;
fig. 13 is a functional component relationship diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are illustrated in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Referring to fig. 1, a schematic step implementation flowchart of an image processing method according to an embodiment of the present invention is shown. As shown in fig. 1, the steps of the method include:
step 101: an original video stream is acquired.
The image processing method provided by the embodiment of the invention is used to perform director processing on video in forms such as live streaming and recorded broadcasting, and first the video material to be processed, namely the original video stream, must be acquired. In practice, since a single camera position can record only a limited scene picture and cannot capture multiple different viewing angles at the same time, the video stream to be processed usually originates from a plurality of image capture devices.
The image processing method provided by the embodiment of the invention is mainly applied to a smart-classroom director system based on artificial intelligence (Artificial Intelligence, AI), in which every student and teacher in the classroom can be accurately identified. The director shot is therefore required to be able to switch between the classroom panorama and regions of interest (Region of Interest, ROI) that have content value, thereby enabling director processing from fixed camera positions. In this process, the director close-up positions can be specified as certain fixed ROI areas according to the actual scene.
Step 102: and extracting the interest areas of the image frames of the video stream according to a preset extraction rule to obtain a plurality of interest areas.
And reading and analyzing the acquired video stream image data, and dividing different areas according to the ROI.
For example: in the embodiment of the invention, aiming at the intelligent classroom with the broadcasting guiding system, the intelligent classroom comprises the acquired video stream facing the students and the video stream facing the teachers. Referring to fig. 2, a schematic diagram of a teacher-side multicast video picture according to an embodiment of the present invention is shown; as shown in fig. 2, the current view includes 4 generated interest areas, namely, a display screen area 201, a first blackboard area 202, a second blackboard area 203, and an area 204 where a teacher is located, except for a podium. Wherein the size of the display area 201 is slightly larger than the display 211.
Referring also to fig. 3, a schematic diagram of a student end director video picture provided by an embodiment of the present invention is shown; as shown in fig. 3, the center area of the current view is a student seat, and two sides are aisles, and the center part includes a plurality of student areas, namely a second student area 302, a third student area 303, a fifth student area 305, a sixth student area 306, and a seventh student area 307. In addition, there are a first student area 301 and a fourth student area 304 on both sides of the screen.
Notably, there may be regions of interest that overlap with each other. The region of interest division shown in fig. 2 and 3 is defined by the developer according to the special scenario of the intelligent classroom. In the image processing method provided by the embodiment of the invention, when the acquired video stream is subjected to image analysis, the acquired interest area can be adjusted according to different broadcasting scenes, and the embodiment of the invention is not limited herein.
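The fixed region-of-interest layout described above can be sketched as a table of named rectangles that are cropped out of each frame. The following is a minimal illustration; the ROI names echo the reference numerals of fig. 2, but all coordinates are hypothetical and would be specified by the developer for the actual classroom:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ROI:
    """A fixed region of interest, in pixel coordinates (x, y, width, height)."""
    name: str
    x: int
    y: int
    w: int
    h: int

# Hypothetical fixed ROI layout for the teacher-side view of fig. 2
# (coordinates are illustrative, not taken from the patent).
TEACHER_ROIS = [
    ROI("display_screen_201",  600,  80, 500, 300),
    ROI("blackboard_1_202",     40, 100, 520, 320),
    ROI("blackboard_2_203",   1140, 100, 520, 320),
    ROI("teacher_204",         700,  60, 560, 640),
]

def crop(frame, roi):
    """Crop one ROI out of a frame given as a nested list (rows of pixels)."""
    return [row[roi.x: roi.x + roi.w] for row in frame[roi.y: roi.y + roi.h]]
```

Because the layout is fixed per camera position, the same `TEACHER_ROIS` table can be reused for every frame of that stream.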
Step 103: and inputting the contents of the plurality of interest areas extracted from the image frames into a content processing model to obtain a guide weight value for each interest area.
After a plurality of interest areas are obtained according to the extraction of the video stream, inputting the image content in the interest areas into a content processing model for evaluation, and obtaining the guiding weight value for each interest area. The multicast weight value is used to evaluate a priority attention of events occurring within the region of interest.
Since the director's close-up is intended to give a close-up to a certain region image in the overall video stream panorama picture, how to select a region of interest worth focusing on among multiple regions of interest is considered according to different scene characteristics.
Specifically, for example, in the teaching process, most of the time, the activity of the whole classroom is focused on the teacher side. The teaching content is taken as the focus of attention: the teaching content of a teacher on a podium, a written blackboard-writing for writing, or a slide teaching material shown on a display screen and the like belong to a part of the teaching content. In this process, since the rhythm of teaching activities in the classroom is mainly mastered by the teacher, the events to be focused at the same time include only one of the above scenes.
Further, if the contents displayed by the teaching board book and the slide show are not changed within a certain time period, and the teaching contents are mainly conveyed by the teacher through dictation, the region with the most attention value at the moment is the region where the teacher is located, so that the guiding feature value of the region of interest is higher. And if new contents are added in the teaching blackboard writing or the display page of the slide is changed, the attention value of the region where the two events are located is improved. In addition, for students, if students take hands to speak and answer questions while teaching is in progress, the guidance priority for the region where the speaking students are located is also higher.
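A minimal stand-in for the scoring behaviour described above can be sketched as a lookup from detected event type to director weight value. Both the event names and the numeric weights below are assumptions for illustration only; the patent's content processing model is a learned model, not a lookup table:

```python
# Illustrative stand-in for the content processing model: each ROI's
# detected event type maps to a director weight value. Event names and
# numeric weights are hypothetical.
BASE_WEIGHT = {
    "teacher_lecturing":  70,  # teacher conveying content by speech
    "blackboard_updated": 80,  # new writing added to the blackboard
    "slide_changed":      80,  # displayed slide page changed
    "student_answering":  85,  # student speaking to answer a question
    "static_content":     40,  # nothing changed in the region
}

def director_weight(event: str) -> int:
    """Return the director weight value for a detected event type."""
    return BASE_WEIGHT.get(event, 0)
```

The ordering of the weights mirrors the text: a changed blackboard or slide outranks an unchanged one, and an answering student outranks both.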
Step 104: and in the process of playing the video stream, conducting the guiding and broadcasting processing on the image content in the target interest area of which the guiding and broadcasting weight value meets the preset condition.
And determining a target interest area meeting preset conditions in the multiple interest areas according to the obtained pilot weight value, and conducting pilot processing. In the embodiment of the invention, a corresponding weight threshold is preset for the pilot weight value, and the interest region is determined as a target interest region under the condition that the pilot weight value obtained by determining the interest region exceeds the weight threshold.
Specifically, for example, the number of weight thresholds is set to 65, the broadcasting weight value obtained in the area 204 where the teacher is located is 70, the display screen area 201 is 60, and the first blackboard area 202 and the second blackboard area 203 are 55 in 4 interest areas at the teacher side. The region 204 of the current video stream where the teacher is located is determined to be the target region of interest.
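The threshold comparison in this example can be sketched as follows; the function name is illustrative, with the numbers taken from the text above (threshold 65, teacher area 70, display 60, blackboards 55):

```python
def select_target_rois(weights, threshold=65):
    """Return the names of ROIs whose director weight value exceeds the threshold."""
    return [name for name, w in weights.items() if w > threshold]

# Example matching the values in the text: only the teacher area qualifies.
weights = {"teacher_204": 70, "display_201": 60,
           "blackboard_202": 55, "blackboard_203": 55}
```

With these values, `select_target_rois(weights)` yields only `"teacher_204"`, matching the example in the text.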
There are various concrete ways to perform director processing on the target region of interest, for example enlarging the picture of the target region of interest, so that during viewing the display device plays only the content of the target region of interest.
It should be noted that director processing is not limited to continuously featuring a single region of interest while the video stream is played. The output can also switch among multiple regions of interest and the panoramic picture of the whole scene, using shot language to guide the viewer to the key content according to the attention value of the events in each region and the rhythm of the activity of the whole scene.
In summary, in the image processing method provided by the embodiment of the present invention, an original video stream is first acquired; regions of interest are extracted from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest; the contents of the plurality of regions of interest extracted from the image frames are input into a content processing model to obtain a director weight value for each region of interest; and during playback of the video stream, director processing is performed on the image content in the target region of interest whose director weight value meets a preset condition. When director processing is applied to the video stream, the full content of the video picture is not computed in its entirety: the picture is first divided into a plurality of different regions of interest, and operations such as analysis and computation are applied only to the key content within those regions, so that the computation for non-interest regions is omitted, the amount of image data to be computed is reduced, and the director processing is more effective in application.
Referring to fig. 4, a flowchart illustrating a complete step implementation of an image processing method according to an embodiment of the present invention is shown; as shown in fig. 4, the steps of the method include:
step 401: an original video stream is acquired.
For this step, reference may be made to step 101 above; details are not repeated here.
Optionally, in an embodiment, the step 401 may specifically include:
Sub-step 4011: acquire a student-side video stream with the students' position as the viewing angle and a teacher-side video stream with the teacher's position as the viewing angle.
Fig. 2 and fig. 3 respectively show the teacher-side video stream picture with the teacher's position as the viewing angle and the student-side video stream picture with the students' position as the viewing angle.
It should be noted that these video streams are merely the video material obtained in the smart-classroom application scenario provided by the embodiment of the present invention. In practical applications, video streams from various other viewing angles are also possible, which is not limited here.
Step 402: extract regions of interest from the image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest.
For this step, reference may be made to step 102 above; details are not repeated here.
Optionally, in an embodiment, the step 402 may specifically include:
Sub-step 4021: perform region extraction on the student-side video stream to obtain the plurality of regions of interest contained in the image frames of the student-side video stream, and perform region extraction on the teacher-side video stream to obtain the plurality of regions of interest contained in the image frames of the teacher-side video stream.
As shown in fig. 2, the current view contains 4 generated regions of interest, namely a display screen area 201, a first blackboard area 202, a second blackboard area 203, and the area 204 where the teacher is located (excluding the podium). The display screen area 201 is slightly larger than the display screen 211 itself.
As also shown in fig. 3, the center of the current view is the student seating area with aisles on both sides; the central part contains a plurality of student areas, namely a second student area 302, a third student area 303, a fifth student area 305, a sixth student area 306, and a seventh student area 307. In addition, a first student area 301 and a fourth student area 304 lie on the two sides of the picture.
In an alternative embodiment, the substep 4021 may further include:
Sub-step 40211: perform object recognition on the image frame to obtain object recognition frames.
The extraction of regions of interest in the embodiment of the invention mainly follows extraction rules preset by the developer. In the smart-classroom application scenario, the panoramic views of the teacher side and the student side are generally fixed, and the types of events to be presented are limited (for example, the content of the 4 regions of interest shown in fig. 2), so preset fixed regions can satisfy the director requirements of this scenario.
As shown in fig. 2, the current view contains 4 generated regions of interest: the display screen area 201, the first blackboard area 202, the second blackboard area 203, and the area 204 where the teacher is located. The geometric dimensions of the display screen and the blackboards are fixed, and their relative positions within the teacher-side viewing angle are fixed, so an object recognition frame can be obtained directly by recognizing the geometric edges of the object, and that recognition frame is used as the edge line of the region of interest. For the area 204 indicating the teacher's position, however, the movement of a person in the classroom is uncertain, so a fixed region lacks generality; the person must first be recognized, and the resulting object recognition frame is then used as the edge line of the region of interest.
Sub-step 40212: and responding to the selection operation of the object recognition frame, and selecting the region where part of the object recognition frame is positioned as the region of interest.
And after object recognition is carried out on the pictures of the video stream to obtain an object recognition frame, selecting an area where at least part of the object recognition frame is located as an interest area according to the actual application scene. For example, referring to fig. 5, a dynamic diagram of region of interest extraction provided by an embodiment of the present invention is shown. The object recognition frames are generated by 6 students of two adjacent rows of seats through object recognition, and 6 object recognition frames in the range are selected to form an interest area on the basis of the object recognition frames, and the interest area completely wraps the generated object recognition frames as shown by a dotted line frame.
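The dashed region of fig. 5, which completely wraps a group of object recognition frames, can be sketched as the smallest rectangle enclosing the selected detection boxes plus a margin. The margin value is an assumption:

```python
def enclosing_roi(boxes, margin=10):
    """Smallest rectangle that fully wraps a set of object-recognition
    boxes given as (x1, y1, x2, y2), expanded by a margin, as with the
    dashed region of fig. 5. The margin value is hypothetical."""
    x1 = min(b[0] for b in boxes) - margin
    y1 = min(b[1] for b in boxes) - margin
    x2 = max(b[2] for b in boxes) + margin
    y2 = max(b[3] for b in boxes) + margin
    # Clamp the top-left corner so the ROI stays inside the frame.
    return (max(x1, 0), max(y1, 0), x2, y2)
```

Applied to the 6 student boxes of fig. 5, this yields one region of interest that completely wraps all of them.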
Optionally, in an embodiment, the number of regions of interest is proportional to the density of the object recognition frames. Specifically, for the students, the greater the crowd density of students in the classroom, the greater the number of regions of interest extracted accordingly. In this process, even if some regions of interest overlap heavily, no two regions of interest have exactly the same size and position.
Sub-step 4022: extract a region of interest from the image frame at the first moment in the video stream according to the preset extraction rule, obtaining the region of interest of the image frame at the first moment.
For streaming processing of a video stream, all content of the entire video stream must be analyzed; likewise, the regions of interest proposed in the above steps must be extracted for the entire video stream.
Specifically, in the embodiment of the invention, since a video stream is formed by playing a plurality of still image frames in time order, the region of interest is first extracted from the image frame at the first moment in the video stream according to the preset extraction rule, obtaining the region of interest of the image frame at the first moment. For the concrete results, refer to fig. 2 and 3; details are not repeated here.
Sub-step 4023: map the contour position of the region of interest in the image frame at the first moment onto the image frames at all other moments in the video stream, obtaining the region of interest of every image frame.
After the region of interest of the first image frame is obtained, because the panorama size of every frame of the video stream is the same, the contour position of the region of interest in the first-moment image frame can be mapped directly onto the image frames at all other moments in the video stream; the mapped region of interest is identical in size and position to the region of interest of the first-moment image frame, so the region of interest of every image frame is obtained.
Step 403: and respectively carrying out size adjustment on the plurality of interest areas, so that the picture sizes of the interest areas are respectively adjusted to the same preset picture size.
Referring to fig. 6, a schematic diagram of a region screen size adjustment process according to an embodiment of the present invention is shown. As shown in fig. 6, the current video stream picture contains two extracted regions of interest: region of interest-1 and region of interest-2. Before inputting the content of the region of interest into the content processing model, the system also needs to adjust the size of the picture in the region of interest.
Specifically, since the regions of interest are extracted by the preset extraction rules in step 402, it is not mandatory to define that the sizes of all the regions of interest are the same in view of flexibility in region extraction. When the interest areas are further processed, the adopted processing model algorithm has uniformity, and after the picture sizes of all the interest areas are adjusted to the same size, the parallel calculation of subsequent content processing is facilitated, and unnecessary resource waste is reduced.
Referring to fig. 6, after the image content of the region of interest is resized, the images of the individual regions of interest are combined side-by-side into an image package. Thanks to the parallel design of the graphics processor (GPU, graphics Processing Unit) on hardware, the use of parallel data packets can reduce the number of layers to be processed and reduce the computation time.
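The resize-and-pack step can be sketched as follows. A real system would use a library resize (for example from OpenCV) and feed the batch to the GPU; here a nearest-neighbour resize in NumPy stands in for it, and the preset size of 224 x 224 is an assumption:

```python
import numpy as np

def resize_nn(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize (a simple stand-in for a library resize call)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row index for each output row
    cols = np.arange(out_w) * w // out_w  # source column index for each output column
    return img[rows][:, cols]

def pack_rois(roi_images, out_h=224, out_w=224):
    """Resize every ROI crop to the same preset size and stack them into a
    single batch, so the content model can process them in parallel."""
    return np.stack([resize_nn(img, out_h, out_w) for img in roi_images])
```

The stacked array has one leading batch dimension per region of interest, which is the "image packet" form that benefits from the GPU's parallel design.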
Step 404: and inputting the contents of the plurality of interest areas extracted from the image frames into a content processing model to obtain a guide weight value for each interest area.
The step may refer to the step 103, and the description of this embodiment is omitted here.
Optionally, in an embodiment, the multicast weight value of the region of interest is proportional to the multicast priority of the image content in the region of interest.
Step 405: and in the process of playing the video stream, conducting the guiding and broadcasting processing on the image content in the target interest area of which the guiding and broadcasting weight value meets the preset condition.
The step may refer to the step 104, and the description of this embodiment is omitted here.
Optionally, in one embodiment, the step 405 may specifically include:
substep 4051: and sequencing the interest areas according to the broadcasting weight value to obtain an interest area score sequence.
After the pilot weight values of a plurality of interest areas are obtained through the content processing model, the interest areas are simply sorted according to the numerical values of the pilot weight values, and the interest area score sequence is obtained. In general, for the same video stream at the same time, there is only one interest area capable of adopting the guide processing, so that the target interest area meeting the preset condition needs to be screened according to the guide weight value.
Sub-step 4052: in the region-of-interest score sequence, when the directing weight value of a region of interest is determined to be greater than a preset weight threshold, determining the region of interest whose directing weight value is greater than the weight threshold as the target region of interest.
In an optional embodiment, if, in the region-of-interest score sequence, the directing weight values of a plurality of regions of interest are all determined to be greater than the weight threshold, the method further includes:
Sub-step 40521: obtaining the content event type of the image frame within the region of interest and the event priority corresponding to that content event type; the content event type characterizes the event occurring within the target region of interest.
In this embodiment of the invention, any region of interest whose directing weight value exceeds the preset weight threshold is considered qualified for directing. In practice, however, the directing weight values of several regions of interest may exceed the threshold simultaneously, so a further selection is needed.
Different content events occur in different regions of interest, and different event priorities can be assigned to them manually. For example, in a classroom scenario the order of attention may be set as: answering a question > raising a hand > listening attentively > head down / fidgeting. This makes it possible to pick the most noteworthy content event among the several regions of interest that qualify for directing.
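The threshold test and the event-priority tie-break can be sketched together as follows; the event names and the priority table encode the illustrative classroom ordering above, and are assumptions rather than a fixed vocabulary:

```python
# Smaller number = higher priority, mirroring the example ordering:
# answering a question > raising a hand > listening > head down / fidgeting.
EVENT_PRIORITY = {
    "answering_question": 0,
    "raising_hand": 1,
    "listening_attentively": 2,
    "head_down_fidgeting": 3,
}

def select_target_region(regions, weight_threshold):
    """regions: list of (region_id, directing_weight, event_type) tuples."""
    qualified = [r for r in regions if r[1] > weight_threshold]
    if not qualified:
        return None  # no region qualifies for directing
    # When several regions exceed the threshold, event priority breaks the tie.
    return min(qualified, key=lambda r: EVENT_PRIORITY[r[2]])[0]
```

For instance, a region showing a raised hand wins over a higher-weighted region of attentive listening once both clear the threshold.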
Sub-step 40522: among all the regions of interest whose directing weight value is greater than the weight threshold, determining the region of interest whose event type has the highest priority as the target region of interest, and performing directing processing on the image content within it.
Following sub-step 40521, the event priorities break the tie: of all the regions whose directing weight value exceeds the threshold, the one whose content event has the highest priority becomes the target region of interest.
Sub-step 4053: performing directing processing on the image content within the target region of interest.
Referring to fig. 7, a logic block diagram of an image processing method according to an embodiment of the present invention is shown. First, an original video stream is captured by a video acquisition device (typically a camera). Regions of interest are then extracted according to a preset extraction rule; after the pictures of the extracted regions of interest are resized, their content is evaluated by the content processing model to generate a directing weight value for each region. The regions are screened by their directing weight values, and the region of interest that meets the preset rule is taken as the target region of interest and output as a close-up.
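The flow of fig. 7 condenses to a few lines of Python; `extract_regions` and `content_model` are stand-ins for the preset extraction rule and the content processing model, which the text leaves unspecified:

```python
def directing_pipeline(frame, extract_regions, content_model, weight_threshold):
    """Return the bbox to output as a close-up, or None if no region qualifies."""
    rois = extract_regions(frame)                  # preset extraction rule
    scored = [(bbox, content_model(frame, bbox)) for bbox in rois]
    scored.sort(key=lambda s: s[1], reverse=True)  # region-of-interest score sequence
    if not scored or scored[0][1] <= weight_threshold:
        return None
    return scored[0][0]                            # target region of interest

# Toy stand-ins for the real components:
frame = [[0] * 8 for _ in range(8)]                # dummy 8x8 image
toy_extract = lambda f: [(0, 0, 4, 4), (4, 4, 4, 4)]
toy_model = lambda f, bbox: 0.9 if bbox == (4, 4, 4, 4) else 0.3
```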
The directing processing itself can be implemented in several different ways.
Optionally, in one embodiment, sub-step 4053 may further include:
Sub-step 40531: taking the geometric center of the target region of interest as the target magnification center; and, during playing of the video stream, uniformly magnifying the image content in the target region of interest to a preset size within a first duration for display, with the target magnification center as the reference.
Referring to fig. 8, an implementation-effect diagram of the content emphasis display provided by an embodiment of the present invention is shown. During directing, the determined target region of interest can be uniformly enlarged so that it occupies a larger display area within the whole display window, focusing the viewer's attention entirely on the content of the target display region.
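A sketch of the zoom-in under stated assumptions: the crop window shrinks linearly from the full frame to the target region over the first duration, centered on the region's geometric center (clamping to frame borders is omitted for brevity):

```python
def zoom_window(frame_w, frame_h, roi, t, t_total):
    """Crop rectangle (x, y, w, h) at elapsed time t of the zoom-in animation."""
    rx, ry, rw, rh = roi
    cx, cy = rx + rw / 2.0, ry + rh / 2.0  # target magnification center
    p = min(max(t / t_total, 0.0), 1.0)    # animation progress in [0, 1]
    w = frame_w + (rw - frame_w) * p       # linear interpolation of crop size
    h = frame_h + (rh - frame_h) * p
    return (cx - w / 2.0, cy - h / 2.0, w, h)
```

Scaling each intermediate crop back up to the display size produces the uniform magnification effect.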
Sub-step 40532: highlighting the edge of the target region of interest during playing of the video stream.
Referring to fig. 9, another implementation-effect diagram of the content emphasis display provided by an embodiment of the invention is shown. Because the edge of the target region of interest consists of clear geometric line segments, emphasis can also be achieved by highlighting that edge.
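Edge highlighting amounts to overdrawing the region's rectangular border in a bright color. A minimal pure-Python sketch over a nested-list "image" (a real implementation would draw on the frame buffer, e.g. with cv2.rectangle):

```python
def highlight_edge(image, roi, color=255):
    """Draw the border of roi = (x, y, w, h) in place and return the image."""
    x, y, w, h = roi
    for col in range(x, x + w):        # top and bottom edges
        image[y][col] = color
        image[y + h - 1][col] = color
    for row in range(y, y + h):        # left and right edges
        image[row][x] = color
        image[row][x + w - 1] = color
    return image
```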
Sub-step 40533: generating, for the plurality of regions of interest, a plurality of display windows floating above the image content of the video stream, where the window size and position of each display window correspond one-to-one to a region of interest whose directing weight value satisfies the preset condition; and performing directing processing on the image content of each region of interest through its display window.
Referring to fig. 10, another implementation-effect diagram of the content emphasis display provided by an embodiment of the invention is shown. A floating display window may be generated at the position of each region of interest to emphasize the content within it. This approach combines directing processing of the regions of interest with full-picture display of the video stream.
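A sketch of the floating-window layout under stated assumptions: each qualifying region gets a window whose size and position track the region, padded by a small margin and clamped to the frame (the margin and frame dimensions are illustrative parameters, not from the text):

```python
def build_overlay_windows(regions, weight_threshold, margin=8,
                          frame_w=1920, frame_h=1080):
    """regions: list of ((x, y, w, h), weight); returns floating-window rects."""
    windows = []
    for (x, y, w, h), weight in regions:
        if weight <= weight_threshold:
            continue                   # only qualifying regions get a window
        wx, wy = max(x - margin, 0), max(y - margin, 0)
        ww = min(w + 2 * margin, frame_w - wx)
        wh = min(h + 2 * margin, frame_h - wy)
        windows.append((wx, wy, ww, wh))
    return windows
```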
Sub-step 40534: dividing the current play window into a first display area and a second display area that are independent of each other; playing the student-side video stream through the first display area, and performing directing processing on the content of any region of interest in the student-side video stream whose directing weight value satisfies the preset condition; and playing the teacher-side video stream through the second display area, and performing directing processing on the content of any region of interest in the teacher-side video stream whose directing weight value satisfies the preset condition.
In addition, since the acquired video streams may come from multiple ends, in this embodiment of the invention the student-side and teacher-side video streams can be shown in split screen. Specifically, the play window is divided into a first display area and a second display area that are independent of each other.
While the student-side video stream is played through the first display area, directing processing is performed on the content of its qualifying regions of interest; at the same time, the teacher-side video stream is played through the second display area, and directing processing is likewise performed on the content of its qualifying regions of interest.
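The split-screen step reduces to computing two disjoint rectangles. A small sketch (the side-by-side orientation is an assumption; the text does not fix one):

```python
def split_play_window(win_w, win_h, side_by_side=True):
    """Return the two independent display-area rectangles (x, y, w, h)."""
    if side_by_side:
        half = win_w // 2   # first area: student-side stream; second: teacher-side
        return (0, 0, half, win_h), (half, 0, win_w - half, win_h)
    half = win_h // 2       # stacked layout
    return (0, 0, win_w, half), (0, half, win_w, win_h - half)
```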
Sub-step 40535: during display of the student-side video stream through a first display device, performing directing processing on the content of any region of interest in the student-side video stream whose directing weight value satisfies the preset condition; and, during display of the teacher-side video stream through a second display device, performing directing processing on the content of any region of interest in the teacher-side video stream whose directing weight value satisfies the preset condition.
Similar to sub-step 40534, each video stream may also be played through its own play window, with the content of the qualifying regions of interest emphasized in each: while the student-side video stream is displayed through the first display device, directing processing is performed on its qualifying regions of interest, and while the teacher-side video stream is displayed through the second display device, directing processing is performed on its qualifying regions of interest.
In summary, in the image processing method provided by the embodiments of the present invention, an original video stream is first obtained; regions of interest are extracted from the image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest; the contents of the plurality of regions of interest extracted from the image frames are input into a content processing model to obtain a directing weight value for each region of interest; and, during playing of the video stream, directing processing is performed on the image content in the target region of interest whose directing weight value satisfies the preset condition. Rather than computing over all the content of the whole video picture, the method first divides the picture into a number of different regions of interest and analyzes only the key content within those regions, selected by their directing weight values. The computation for non-interest regions is skipped, the amount of image data to be processed is reduced, and the directing processing is correspondingly more effective.
Referring to fig. 11, a schematic diagram of the functional components of an image processing apparatus according to an embodiment of the present invention is shown. The apparatus comprises:
a video stream obtaining module 501, configured to obtain an original video stream;
a region-of-interest obtaining module 502, configured to extract regions of interest from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest;
a directing weight value determining module 503, configured to input the contents of the plurality of regions of interest extracted from the image frames into a content processing model and obtain a directing weight value for each region of interest; and
a directing processing executing module 504, configured to perform, during playing of the video stream, directing processing on the image content in the target region of interest whose directing weight value satisfies the preset condition.
Optionally, the apparatus further includes:
a size adjusting module, configured to resize the plurality of regions of interest, before their contents are input into the content processing model, so that the picture sizes of the regions of interest are all adjusted to the same preset picture size.
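The resizing performed by the size adjusting module can be illustrated with a nearest-neighbour resize. Real code would use a library routine such as cv2.resize; this pure-Python version is only a sketch of the idea:

```python
def resize_nearest(img, out_w, out_h):
    """Resize a nested-list image to out_w x out_h by nearest-neighbour sampling."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

# Every region-of-interest crop is brought to the same preset picture size
# before being fed to the content processing model:
preset = (4, 4)
crop = [[1, 2], [3, 4]]
resized = resize_nearest(crop, *preset)
```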
Optionally, the region-of-interest obtaining module 502 further includes:
an object recognition sub-module, configured to perform object recognition on the image frames to obtain object recognition frames; and
a region-of-interest generation sub-module, configured to select, in response to a selection operation on the object recognition frames, the regions where part of the object recognition frames are located as regions of interest.
Optionally, the region-of-interest obtaining module 502 further includes:
a first-frame region-of-interest extraction sub-module, configured to extract a region of interest from the image frame at the first moment in the video stream according to a preset extraction rule, obtaining the region of interest of the image frame at the first moment;
a complete region-of-interest extraction sub-module, configured to map the contour position of the region of interest in the image frame at the first moment onto the image frames at all other moments in the video stream, obtaining the region of interest of each image frame; and
a video stream region-of-interest extraction sub-module, configured to perform region extraction on the student-side video stream to obtain a plurality of regions of interest comprising image frames of the student-side video stream, and to perform region extraction on the teacher-side video stream to obtain a plurality of regions of interest comprising image frames of the teacher-side video stream.
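The first-frame extraction strategy described by the sub-modules above can be sketched as follows: the extraction rule runs once on the first frame, and the resulting contours are reused unchanged on every later frame, which is what makes the approach cheap for fixed-camera scenes:

```python
def propagate_rois(frames, extract_rule):
    """Run the extraction rule once, then reuse its regions for every frame."""
    first_rois = extract_rule(frames[0])
    return [(frame, list(first_rois)) for frame in frames]

frames = ["frame_t0", "frame_t1", "frame_t2"]      # stand-ins for images
rule = lambda frame: [(0, 0, 2, 2), (4, 4, 2, 2)]  # toy extraction rule
per_frame = propagate_rois(frames, rule)
```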
Optionally, the directing processing executing module 504 further includes:
a score sequence generation sub-module, configured to sort the regions of interest by directing weight value to obtain a region-of-interest score sequence;
a target region-of-interest determining sub-module, configured to determine, in the region-of-interest score sequence, a region of interest whose directing weight value is greater than a preset weight threshold as the target region of interest; and
an emphasis display execution sub-module, configured to perform directing processing on the image content in the target region of interest.
Optionally, the target region-of-interest determining sub-module may further include:
a content event feature determining unit, configured to obtain the content event type of the image frame within the region of interest and the event priority corresponding to that content event type, where the content event type characterizes the event occurring within the target region of interest; and
a target region-of-interest determining unit, configured to determine, among all the regions of interest whose directing weight value is greater than the weight threshold, the region of interest whose event type has the highest priority as the target region of interest, and to perform directing processing on the image content within it.
Optionally, the emphasis display execution sub-module may further include:
a magnification center determining unit, configured to take the geometric center of the target region of interest as the target magnification center; and
an image magnification execution unit, configured to uniformly magnify, during playing of the video stream and with the target magnification center as the reference, the image content in the target region of interest to a preset size within a first duration for display.
Optionally, the emphasis display execution sub-module may further include:
a highlighting execution unit, configured to highlight the edge of the target region of interest during playing of the video stream.
Optionally, the emphasis display execution sub-module may further include:
a floating window generating unit, configured to generate, for the plurality of regions of interest, a plurality of display windows floating above the image content of the video stream, where the window size and position of each display window correspond one-to-one to a region of interest whose directing weight value satisfies the preset condition; and
a multi-window display execution unit, configured to perform directing processing on the image content of each region of interest through its display window.
Optionally, the emphasis display execution sub-module may further include:
a display window dividing unit, configured to divide the current play window into a first display area and a second display area that are independent of each other;
a first emphasis display execution unit, configured to play the student-side video stream through the first display area and perform directing processing on the content of any region of interest in the student-side video stream whose directing weight value satisfies the preset condition; and
a second emphasis display execution unit, configured to play the teacher-side video stream through the second display area and perform directing processing on the content of any region of interest in the teacher-side video stream whose directing weight value satisfies the preset condition.
Optionally, the emphasis display execution sub-module may further include:
a third emphasis display execution unit, configured to perform, during display of the student-side video stream through the first display device, directing processing on the content of any region of interest in the student-side video stream whose directing weight value satisfies the preset condition; and
a fourth emphasis display execution unit, configured to perform, during display of the teacher-side video stream through the second display device, directing processing on the content of any region of interest in the teacher-side video stream whose directing weight value satisfies the preset condition.
Optionally, the video stream obtaining module 501 further includes:
a video stream acquisition sub-module, configured to acquire a student-side video stream whose viewing angle is the student position and a teacher-side video stream whose viewing angle is the teacher position.
In summary, the image processing apparatus provided by the embodiments of the present invention first obtains an original video stream; extracts regions of interest from the image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest; inputs the contents of the plurality of regions of interest into a content processing model to obtain a directing weight value for each region of interest; and, during playing of the video stream, performs directing processing on the image content in the target region of interest whose directing weight value satisfies the preset condition. Rather than computing over all the content of the whole video picture, the apparatus divides the picture into a number of different regions of interest and analyzes only the key content within those regions, selected by their directing weight values; the computation for non-interest regions is skipped, the amount of image data to be processed is reduced, and the directing processing is correspondingly more effective.
Fig. 12 is a block diagram of an electronic device 600, according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 12, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with presentation, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is used to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, multimedia, and so forth. The memory 604 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 606 provides power to the various components of the electronic device 600. The power supply components 606 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen providing an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operational mode, such as a shooting mode or a multimedia mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 614 includes one or more sensors for providing status assessments of various aspects of the electronic device 600. For example, the sensor assembly 614 may detect the on/off state of the electronic device 600 and the relative positioning of components such as its display and keypad; it may also detect a change in position of the electronic device 600 or of one of its components, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and changes in its temperature. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 facilitates wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for implementing an image processing method as provided by an embodiment of the invention.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 604 including instructions executable by the processor 620 of the electronic device 600 to perform the above-described method. For example, the non-transitory storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 13 is a block diagram of an electronic device 700, according to an example embodiment. For example, the electronic device 700 may be provided as a server. Referring to fig. 13, electronic device 700 includes a processing component 722 that further includes one or more processors and memory resources represented by memory 732 for storing instructions, such as application programs, executable by processing component 722. The application programs stored in memory 732 may include one or more modules that each correspond to a set of instructions. In addition, the processing component 722 is configured to execute instructions to perform an image processing method provided by an embodiment of the present invention.
The electronic device 700 may also include a power supply component 726 configured to perform power management of the electronic device 700, a wired or wireless network interface 750 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 758. The electronic device 700 may operate based on an operating system stored in memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (17)

1. An image processing method, the method comprising:
acquiring an original video stream;
extracting regions of interest from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest;
inputting contents of the plurality of regions of interest extracted from the image frames into a content processing model to obtain a directing weight value for each region of interest; and
during playing of the video stream, performing directing processing on image content in a target region of interest whose directing weight value satisfies a preset condition.
2. The method of claim 1, further comprising, before inputting the contents of the plurality of regions of interest extracted from the image frames into the content processing model:
resizing the plurality of regions of interest so that the picture sizes of the regions of interest are all adjusted to the same preset picture size.
3. The method of claim 1, wherein the extracting regions of interest from the image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest comprises:
performing object recognition on the image frames to obtain object recognition frames; and
selecting, in response to a selection operation on the object recognition frames, regions where part of the object recognition frames are located as the regions of interest.
4. The method of claim 3, wherein the number of regions of interest is proportional to the density of the object recognition frames.
5. The method of claim 1, wherein the extracting regions of interest from the image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest comprises:
extracting a region of interest from the image frame at a first moment in the video stream according to the preset extraction rule to obtain the region of interest of the image frame at the first moment; and
mapping a contour position of the region of interest in the image frame at the first moment onto the image frames at all other moments in the video stream to obtain the region of interest of each image frame.
6. The method of claim 1, wherein the directing weight value of a region of interest is proportional to the directing priority of the image content within the region of interest.
7. The method of claim 1, wherein the performing directing processing on the image content in the target region of interest whose directing weight value satisfies the preset condition comprises:
sorting the regions of interest by directing weight value to obtain a region-of-interest score sequence;
in the region-of-interest score sequence, when the directing weight value of a region of interest is determined to be greater than a preset weight threshold, determining the region of interest whose directing weight value is greater than the weight threshold as the target region of interest; and
performing directing processing on the image content in the target region of interest.
8. The method of claim 7, wherein, if, in the region-of-interest score sequence, the directing weight values of a plurality of the regions of interest are each determined to be greater than the weight threshold, the method further comprises:
obtaining a content event type of the image frame within the region of interest and an event priority corresponding to the content event type, the content event type characterizing an event occurring within the target region of interest; and
determining, among all the regions of interest whose directing weight value is greater than the weight threshold, the region of interest whose event type has the highest priority as the target region of interest, and performing directing processing on the image content within the target region of interest.
9. The method of claim 1, wherein the performing directing processing on the image content within the target region of interest comprises:
taking a geometric center of the target region of interest as a target magnification center; and
during playing of the video stream, uniformly magnifying the image content in the target region of interest to a preset size within a first duration for display, with the target magnification center as a reference.
10. The method of claim 1, wherein the performing directing processing on the image content of the target region of interest further comprises:
highlighting an edge of the target region of interest during playing of the video stream.
11. The method of claim 1, wherein, in the case where the directing weight values of a plurality of regions of interest are determined to satisfy a preset condition, performing directing processing on the image content in each region of interest whose directing weight value satisfies the preset condition comprises:
generating, for the plurality of regions of interest, a plurality of display windows floating above the image content of the video stream, the size and position of each display window corresponding one-to-one to a region of interest whose directing weight value satisfies the preset condition;
and performing directing processing on the image content of each region of interest through the corresponding display window.
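A minimal sketch of the window-generation step of claim 11, assuming each region of interest is a dict with an assumed `rect` tuple and `weight` score; the descriptor format is illustrative, not part of the claim.

```python
# Hypothetical sketch of claim 11: create one floating display window per
# region of interest whose directing weight satisfies the preset condition,
# with window size and position matching the region one-to-one.

def make_overlay_windows(rois, weight_threshold):
    """rois: list of dicts with 'rect' (x, y, w, h) and 'weight' keys.
    Returns window descriptors floating above the base video layer."""
    return [
        {"layer": "overlay", "rect": r["rect"], "source_roi": i}
        for i, r in enumerate(rois)
        if r["weight"] > weight_threshold
    ]
```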
12. The method of claim 1, wherein obtaining the original video stream comprises:
obtaining a student-side video stream captured from the students' viewpoint and a teacher-side video stream captured from the teacher's viewpoint;
and wherein extracting regions from the image frames of the video stream according to the preset extraction rule to obtain the plurality of regions of interest comprises:
performing region extraction on the student-side video stream to obtain a plurality of regions of interest in its image frames, and performing region extraction on the teacher-side video stream to obtain a plurality of regions of interest in its image frames.
13. The method of claim 12, wherein performing directing processing, during playback of the video stream, on the image content in the target region of interest according to the directing weight value comprises:
dividing the current playback window into a first display area and a second display area that are independent of each other;
playing the student-side video stream in the first display area, and performing directing processing on the content of any region of interest in the student-side video stream whose directing weight value satisfies the preset condition;
and playing the teacher-side video stream in the second display area, and performing directing processing on the content of any region of interest in the teacher-side video stream whose directing weight value satisfies the preset condition.
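The split-window layout of claim 13 can be sketched geometrically; the half-and-half split and the `vertical` option below are assumptions for illustration, since the claim only requires two mutually independent display areas.

```python
# Hypothetical sketch of claim 13: split the playback window into two
# independent display areas, one for the student-side stream and one for
# the teacher-side stream. The 50/50 split is illustrative.

def split_window(window_w, window_h, vertical=True):
    """Return (first_area, second_area) as (x, y, w, h) rectangles."""
    if vertical:  # side-by-side halves
        half = window_w // 2
        return (0, 0, half, window_h), (half, 0, window_w - half, window_h)
    half = window_h // 2  # stacked halves
    return (0, 0, window_w, half), (0, half, window_w, window_h - half)
```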
14. The method of claim 12, wherein performing directing processing, during playback of the video stream, on the image content in the target region of interest according to the directing weight value comprises:
while displaying the student-side video stream on a first display device, performing directing processing on the content of any region of interest in the student-side video stream whose directing weight value satisfies the preset condition;
and while displaying the teacher-side video stream on a second display device, performing directing processing on the content of any region of interest in the teacher-side video stream whose directing weight value satisfies the preset condition.
15. An image processing apparatus, characterized in that the apparatus comprises:
a video stream acquisition module configured to obtain an original video stream;
an interest region acquisition module configured to extract regions from image frames of the video stream according to a preset extraction rule to obtain a plurality of regions of interest;
a directing weight value determination module configured to input the content of the regions of interest extracted from the image frames into a content processing model to obtain a directing weight value for each region of interest;
and a directing processing execution module configured to perform directing processing, during playback of the video stream, on the image content within a target region of interest whose directing weight value satisfies a preset condition.
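The four modules of claim 15 form a per-frame pipeline: acquire, extract regions, score, direct. A hypothetical sketch, with the module behaviors passed in as callables since the patent does not fix their implementations:

```python
# Hypothetical sketch of the apparatus of claim 15 as a per-frame pipeline.
# Each stage mirrors one claimed module; the callables are placeholders.

def process_stream(frames, extract_regions, score_regions, direct):
    """frames: iterable of image frames (the acquired original video stream).
    extract_regions: frame -> list of regions of interest.
    score_regions: regions -> list of directing weight values (content model).
    direct: (frame, regions, weights) -> directed output frame."""
    for frame in frames:                       # video stream acquisition module
        regions = extract_regions(frame)       # interest region acquisition module
        weights = score_regions(regions)       # directing weight determination module
        yield direct(frame, regions, weights)  # directing processing execution module
```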
16. An electronic device, comprising: a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 14.
17. A readable storage medium, characterized in that instructions in the readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1 to 14.
CN202310363832.9A 2023-04-06 2023-04-06 Image processing method and device, electronic equipment and readable storage medium Pending CN116527828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310363832.9A CN116527828A (en) 2023-04-06 2023-04-06 Image processing method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN116527828A true CN116527828A (en) 2023-08-01

Family

ID=87400271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310363832.9A Pending CN116527828A (en) 2023-04-06 2023-04-06 Image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116527828A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117998110A (en) * 2024-01-29 2024-05-07 广州开得联软件技术有限公司 Distributed guide method, device, electronic equipment and computer readable storage medium


Similar Documents

Publication Publication Date Title
WO2021232775A1 (en) Video processing method and apparatus, and electronic device and storage medium
US10645332B2 (en) Subtitle displaying method and apparatus
CN112287844B (en) Student situation analysis method and device, electronic device and storage medium
CN109257645B (en) Video cover generation method and device
US9930270B2 (en) Methods and apparatuses for controlling video content displayed to a viewer
JP6165846B2 (en) Selective enhancement of parts of the display based on eye tracking
US20220147741A1 (en) Video cover determining method and device, and storage medium
CN107430629A (en) Point priority of vision content in computer presentation is shown
CN109168062B (en) Video playing display method and device, terminal equipment and storage medium
CN109862380B (en) Video data processing method, device and server, electronic equipment and storage medium
US10083618B2 (en) System and method for crowd sourced multi-media lecture capture, sharing and playback
CN108986117B (en) Video image segmentation method and device
CN110677734A (en) Video synthesis method and device, electronic equipment and storage medium
CN106454411B (en) Station caption processing method and device
US11847818B2 (en) Method for extracting video clip, device for extracting video clip, and storage medium
CN114205635A (en) Live comment display method, device, equipment, program product and medium
CN116527828A (en) Image processing method and device, electronic equipment and readable storage medium
CN107105311B (en) Live broadcasting method and device
CN112866801A (en) Video cover determining method and device, electronic equipment and storage medium
CN105635573B (en) Camera visual angle regulating method and device
CN110636377A (en) Video processing method, device, storage medium, terminal and server
EP3799415A2 (en) Method and device for processing videos, and medium
CN108769780B (en) Advertisement playing method and device
CN112541402A (en) Data processing method and device and electronic equipment
CN111144255B (en) Analysis method and device for non-language behaviors of teacher

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination