CN112784070A - User portrait method based on big data - Google Patents
User portrait method based on big data
- Publication number
- CN112784070A (application CN202011623573.1A)
- Authority
- CN
- China
- Prior art keywords
- user
- data
- portrait
- big
- final
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention belongs to the technical field of internet data analysis and in particular relates to a user portrait method based on big data, comprising the following steps: a data acquisition step, in which user data comprising information data, behavior data and other data is acquired; a data processing step, in which the user data is transmitted to a data processing platform, preprocessed, and then integrated across multiple sources; a basic portrait step, in which a basic user portrait is built from the preprocessed information data; a supplementary portrait step, in which a supplementary user portrait is obtained from the integrated user data using a preset model; and a final portrait step, in which the basic user portrait and the supplementary user portrait are fused according to a preset rule to obtain a final user portrait. With this method, the final user portrait is more comprehensive and objective, and with its support the user's content arrangement and management system can be adjusted in a more targeted way, improving the user experience.
Description
Technical Field
The invention belongs to the technical field of internet data analysis, and particularly relates to a user portrait method based on big data.
Background
IPTV, a system for distributing television media over a broadband network, has developed rapidly in recent years. Leveraging network communication, IPTV can offer users a wider range of services: in addition to traditional live television, it provides time-shifted television, video on demand, music and game playing, interactive services, and more. Film and television programs account for the largest share of the overall service volume and cover the widest range of user types.
Although operators control huge IPTV user traffic, service development still faces many problems in the transition from raw traffic to fine-grained operation, for example: how to mine user value and increase subscription and payment rates; how to encourage users to keep subscribing, or even to subscribe to multiple products; how to improve the large-screen viewing experience and raise user activity; and how to win back and reactivate dormant or lost users.
To solve these problems, besides ensuring a sufficient supply of content, the user experience must be improved. Because each user has different preferences, content must be ranked specifically according to each user's preferences, which is where the user portrait comes in.
At present, the basic approach to user portraits in this field is to treat each user behavior, together with the content that is the object of that behavior, as an isolated data point, aggregate the massive set of data points formed by all user behaviors and their objects, and find statistical distribution patterns among them.
This approach can reveal users' usage habits to some extent from the statistical distribution. However, such statistical methods are too crude and underutilize the user data. They have some value for interpreting the behavior of the general population, but the accuracy of a portrait built for an individual user is hard to guarantee.
Therefore, existing IPTV products and platforms cannot meet the requirements of fine-grained operation, and it is difficult to obtain an effective portrait for an individual user.
Disclosure of Invention
The invention aims to provide a user portrait method based on big data that can obtain an effective user portrait for an individual user.
The basic scheme provided by the invention is as follows:
a big data based user portrayal method, comprising:
acquiring user data, wherein the user data comprises information data, behavior data and other data;
a data processing step, namely transmitting the user data to a data processing platform, preprocessing the user data and then integrating the multi-source data;
a basic portrait step, namely building a basic user portrait according to the preprocessed information data;
a supplementary portrait step, namely obtaining a supplementary user portrait by using a preset model according to the integrated user data;
and a final portrait drawing step, namely performing model fusion on the basic user portrait and the supplementary portrait according to a preset rule to obtain a final user portrait.
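As a rough sketch, the five steps above can be organized as a single pipeline. The patent does not disclose concrete data schemas or models, so every function, field name, and rule below is a hypothetical illustration, not the actual implementation:

```python
# Hypothetical sketch of the five-step pipeline; all names are illustrative only.

def acquire_user_data():
    """Data acquisition step: information data, behavior data, and other data."""
    return {
        "information": {"age": 34, "region": "north"},
        "behavior": [{"action": "view", "program": "news"},
                     {"action": "pause", "program": "news"}],
        "other": {"viewing_date": "2020-12-31", "viewing_duration_min": 42},
    }

def preprocess(user_data):
    """Data processing step: preprocess, then integrate the multi-source data."""
    cleaned = {k: v for k, v in user_data.items() if v}  # drop empty sources
    integrated = {**cleaned["information"], **cleaned["other"],
                  "n_events": len(cleaned["behavior"])}
    return cleaned, integrated

def basic_portrait(information):
    """Basic portrait step: built from the preprocessed information data."""
    return {"demographic": information}

def supplementary_portrait(integrated):
    """Supplementary portrait step: a preset model over the integrated data
    (replaced here by a trivial rule, purely for illustration)."""
    return {"activity": "high" if integrated["n_events"] > 1 else "low"}

def fuse(basic, supplementary):
    """Final portrait step: model fusion under a preset rule (here: union)."""
    return {**basic, **supplementary}

raw = acquire_user_data()
cleaned, integrated = preprocess(raw)
final = fuse(basic_portrait(cleaned["information"]),
             supplementary_portrait(integrated))
print(final["activity"])  # "high"
```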
Advantageous effects:
Compared with existing user analysis, which only performs statistical analysis of user preferences on basic user data, this method makes full use of multi-source information about the user.
In addition, the method integrates the basic user portrait approach with a more advanced processing approach (the supplementary user portrait), so that the basic user data still contributes to the final user portrait (through the basic user portrait) while the user's behavior data and other data also contribute (through the supplementary user portrait). The final user portrait is obtained by model fusion, which allows the two sources of information to supplement and complement each other. Compared with a conventional user portrait, the final user portrait of this method is more comprehensive and objective, and with its support the user's content arrangement and management system can be adjusted in a more targeted way, improving the user experience.
Therefore, the method can effectively obtain a user portrait for an individual user.
Further, in the supplementary portrait step, an attention mechanism is introduced when training the preset model.
After the attention mechanism is introduced, the trained model can produce results more accurately and quickly when building the supplementary user portrait.
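The patent does not specify which attention variant is used. As one common possibility, a scaled dot-product attention that weights per-source feature vectors by their relevance to a query could look like the following pure-Python sketch (all shapes and names are illustrative assumptions):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value vector by how well its
    key matches the query. The patent only states that an attention mechanism
    is introduced during training; this particular form is an assumption."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# Three data sources; the second key matches the query best, so its value
# receives the largest weight in the fused output.
out, weights = attention(query=[1.0, 0.0],
                         keys=[[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]],
                         values=[[10.0], [20.0], [15.0]])
print(weights.index(max(weights)))  # 1
```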
Furthermore, in the final portrait step, the preset rule is a feature fusion method.
Compared with other fusion rules, feature fusion copes with the missing data frequently encountered in practice: not every user has complete data, and feature fusion ensures that a usable final user portrait can still be obtained in all of these cases.
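The patent does not detail the feature fusion rule itself. One way such a rule can tolerate incomplete user data, offered here purely as an illustrative assumption, is to concatenate whatever feature groups are present and mask absent ones with a value-plus-presence-flag pair:

```python
def feature_fuse(portraits, feature_names):
    """Concatenate features from several portraits into one fused vector.
    A missing feature is masked with 0.0 plus a presence flag of 0.0, so a
    usable fused portrait is produced even when a user's data is incomplete.
    (Illustrative sketch; the patent does not specify this exact rule.)"""
    fused = []
    for name in feature_names:
        found = None
        for p in portraits:
            if name in p:
                found = p[name]
                break
        fused.append(found if found is not None else 0.0)  # masked value
        fused.append(1.0 if found is not None else 0.0)    # presence flag
    return fused

basic = {"age": 34.0}
supplementary = {"evening_bias": 0.8}  # this user has no "genre_score" data
vec = feature_fuse([basic, supplementary],
                   ["age", "evening_bias", "genre_score"])
print(vec)  # [34.0, 1.0, 0.8, 1.0, 0.0, 0.0]
```

A downstream model can then learn to ignore masked slots via the presence flags, which is one way "an applicable final user portrait under various conditions" could be realized.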
Further, there are multiple supplementary user portraits.
Because the user data is diverse, multiple supplementary user portraits allow each item of user data to play its role. At the same time, multiple supplementary user portraits make the final user portrait more refined and more accurate.
Further, the behavior data includes click, view, pause, and exit.
Further, the other data includes a viewing date, a viewing time length, and a viewing frequency.
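The two optional data categories above could be represented, for example, by simple record types; the field and class names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorEvent:
    """One behavior-data record (the action categories come from the patent;
    the field names are illustrative)."""
    action: str   # one of: 'click', 'view', 'pause', 'exit'
    program: str

@dataclass
class ViewingContext:
    """'Other data' record: when, how long, and how often the user watched."""
    viewing_date: str
    viewing_duration_min: int
    viewing_frequency: int
    events: list = field(default_factory=list)

ctx = ViewingContext("2020-12-31", 42, 3)
ctx.events.append(BehaviorEvent("view", "evening news"))
print(len(ctx.events), ctx.events[0].action)  # 1 view
```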
Further, in the data processing step, the preprocessing comprises data cleaning, data integration, data transformation and data reduction.
This facilitates subsequent use.
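The four preprocessing stages named above can be sketched as follows; the concrete operations chosen for each stage are illustrative assumptions, since the patent only names the stages:

```python
def preprocess(records):
    """Sketch of the four preprocessing stages named in the patent."""
    # 1. Data cleaning: drop records with a missing user id or duration.
    cleaned = [r for r in records
               if r.get("user") and r.get("minutes") is not None]
    # 2. Data integration: merge per-source records into one table per user.
    merged = {}
    for r in cleaned:
        merged.setdefault(r["user"], []).append(r)
    # 3. Data transformation: normalize durations from minutes to hours.
    for rs in merged.values():
        for r in rs:
            r["hours"] = r["minutes"] / 60.0
    # 4. Data reduction: keep one aggregate per user instead of raw events.
    return {u: sum(r["hours"] for r in rs) for u, rs in merged.items()}

records = [
    {"user": "u1", "minutes": 90},
    {"user": "u1", "minutes": 30},
    {"user": None, "minutes": 10},    # removed by cleaning
    {"user": "u2", "minutes": None},  # removed by cleaning
]
result = preprocess(records)
print(result)  # {'u1': 2.0}
```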
Further, the user data is multimodal data.
This makes the collectable user data richer and more complete.
Further, the method also comprises a model adjustment step, in which the model that builds the supplementary user portrait is adjusted.
Staff can adjust the model that builds the supplementary user portrait according to how well the final user portrait performs in use, making the method more broadly applicable.
Further, in the adjustment step, the model that builds the supplementary user portrait can only be adjusted after identity authentication has passed.
This prevents losses caused by malicious operation by unauthorized persons.
Drawings
Fig. 1 is a flowchart of the first embodiment of the big data-based user portrait method according to the present invention.
Detailed Description
The following is further detailed by way of specific embodiments:
example one
As shown in fig. 1, a big data-based user portrayal method includes:
acquiring user data, wherein the user data comprises information data, behavior data and other data; in this implementation, the behavior data includes click, view, pause, and exit. Other data includes viewing date, viewing time, viewing duration, and viewing frequency.
A data processing step, namely transmitting the user data to a data processing platform, preprocessing the user data and then integrating the multi-source data; wherein the preprocessing comprises data cleaning, data integration, data transformation and data reduction.
A basic portrait step, namely building a basic user portrait according to the preprocessed information data;
a supplementary portrait step, namely obtaining a supplementary user portrait by using a preset model according to the integrated user data; wherein the supplemental user representation is plural. When the preset model is trained, an attention mechanism is introduced for training, and after the attention mechanism is introduced, the model at the training position can obtain a result more accurately and quickly when a user image is supplemented.
And a final portrait drawing step, namely performing model fusion on the basic user portrait and the supplementary portrait according to a preset rule to obtain a final user portrait. In this embodiment, the preset rule is a feature fusion method.
The specific implementation process is as follows:
After the information data, behavior data and other data of a user are collected, the user data is transmitted to the data processing platform, which preprocesses it (data cleaning, data integration, data transformation, data reduction, etc.) for subsequent use and then performs multi-source data integration.
Then a basic user portrait is built from the preprocessed information data, and supplementary user portraits are obtained from the integrated user data using the preset models. In this embodiment there are multiple supplementary user portraits. Because the user data is diverse, multiple supplementary user portraits allow each item of user data to play its role, and they also make the final user portrait more refined and more accurate.
Then, according to the preset rule, the basic user portrait and the supplementary user portraits are fused to obtain the final user portrait. The preset rule in this embodiment is a feature fusion method. Compared with other fusion rules, feature fusion copes with the missing data frequently encountered in practice: not every user has complete data, and feature fusion ensures that a usable final user portrait can still be obtained in all of these cases.
Compared with existing user analysis, which only performs statistical analysis of user preferences on basic user data, this method makes full use of multi-source information about the user.
In addition, the method integrates the basic user portrait approach with a more advanced processing approach, so that the basic user data still contributes (through the basic user portrait) while the user's behavior data and other data also contribute (through the supplementary user portraits). The final user portrait is obtained by model fusion, which allows the two sources of information to supplement and complement each other. Compared with a conventional user portrait, the final user portrait of this method is more comprehensive and objective, and with its support the user's content arrangement and management system can be adjusted in a more targeted way, improving the user experience.
With this method, a user portrait for an individual user can be obtained effectively.
Example two
Unlike the first embodiment, this embodiment also comprises a model adjustment step, in which the model that builds the supplementary user portrait is adjusted; in addition, the model can only be adjusted after identity authentication has passed. Staff can therefore adjust the model according to how well the final user portrait performs in use, making the method more broadly applicable, while losses caused by malicious operation by unauthorized persons are prevented.
EXAMPLE III
Unlike the first embodiment, in this embodiment the acquired user data also includes the user's voice. Specifically, the voice can be collected by a microphone installed on the smart remote control; this is prior art and is not described further here. In this embodiment, the behavior data also includes the programs watched.
The method also comprises a storage step and a real-time analysis step.
In the storage step, the final user portrait of each user is stored in a storage unit.
In the real-time analysis step, user identity recognition and emotion recognition are performed on the voice. When the recognition result is a single user who is not a new user, that user's final user portrait is retrieved and recommended content is generated by combining the recognized emotion with the media asset data. When the recognition result is multiple users with no new user among them, the system analyzes whether any user is emotionally abnormal. If no user is emotionally abnormal, the system checks whether the storage unit holds a priority ranking for these users: if not, the content recommendation unit retrieves the final user portraits of all of them and generates recommended content from the media asset data on an equal-share basis, while the user analysis unit ranks the users by priority according to their behavior data and final user portraits and the storage unit stores the ranking; if a priority ranking is stored, the content recommendation unit retrieves the final user portrait of the highest-priority user and outputs the corresponding recommended content in combination with the media asset data.
If no user is emotionally abnormal and a priority ranking is stored, but the program being watched in the behavior data does not match the recommended content, the user analysis unit records the current time, retrieves the final user portraits of all users, determines the currently dominant user, and updates the priority ranking in combination with the current time and program: the dominant user has the highest priority whenever the corresponding program is present in the media asset data at that time slot.
If there is an emotionally abnormal user, the content recommendation unit retrieves that user's final user portrait and generates recommended content by combining the recognized emotion with the media asset data.
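The branching of the real-time analysis step can be sketched as a single decision function. Storage, identity recognition, and emotion recognition are abstracted away, and every name below is a hypothetical illustration of the patent's described logic, not its actual implementation:

```python
def recommend(users, emotions, priorities, portraits):
    """Decision logic of the real-time analysis step (illustrative sketch).
    users:      recognized user ids (no new users among them)
    emotions:   user id -> 'normal' or 'abnormal'
    priorities: user id -> rank (lower = higher priority), or None if no
                ranking is stored yet
    portraits:  user id -> final user portrait (here just a favorite genre)"""
    abnormal = [u for u in users if emotions.get(u) == "abnormal"]
    if len(users) == 1:
        # single known user: recommend from their final portrait + emotion
        return portraits[users[0]]["favorite"]
    if abnormal:
        # care for the emotionally abnormal user's mood first
        return portraits[abnormal[0]]["favorite"]
    if priorities is None:
        # no stored ranking yet: recommend on an equal-share basis
        return [portraits[u]["favorite"] for u in users]
    # stored ranking exists: follow the highest-priority user's portrait
    lead = min(users, key=lambda u: priorities[u])
    return portraits[lead]["favorite"]

portraits = {"a": {"favorite": "news"}, "b": {"favorite": "football"}}
pick = recommend(["a", "b"],
                 emotions={"a": "normal", "b": "abnormal"},
                 priorities={"a": 0, "b": 1},
                 portraits=portraits)
print(pick)  # football — the emotionally abnormal user is prioritized
```

The priority-update branch (recording the time slot of a dominant user) would then adjust the `priorities` mapping between calls.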
The specific implementation process is as follows:
When there is only one user and the user is not new, the user's final user portrait is retrieved and recommended content is generated from the recognized emotion and the media asset data, giving the user a good experience.
When multiple users are watching together, none is emotionally abnormal, and a priority ranking is stored in the storage unit, the content recommendation unit retrieves the final user portrait of the highest-priority user and outputs the corresponding recommended content in combination with the media asset data. People who live together usually have an implicit priority order over the IPTV: normally the highest-priority person present decides what to watch. With this arrangement, content is pushed directly for the highest-priority user, saving the time of blindly searching for programs; and because content is not pushed according to the other users' final user portraits, the frustration those users would feel at browsing programs they cannot choose is also reduced.
When multiple users are watching together, none is emotionally abnormal, and no priority ranking is stored, the content recommendation unit retrieves the final user portraits of all users and generates recommended content from the media asset data on an equal-share basis; the user analysis unit also ranks the users by priority according to their behavior data and final user portraits, and the storage unit stores the priorities. In this way a priority ranking is built up gradually, improving the overall experience of subsequent group viewing.
When multiple users are watching together, none is emotionally abnormal, and a priority ranking is stored, but the program being watched does not match the recommended content, it may be that a user other than the highest-priority one has the strongest claim to the screen in the current time slot: for example, a young person may watch a football match on Friday night. The user analysis unit therefore records the current time, retrieves the final user portraits of all users, determines the currently dominant user, and updates the priority ranking in combination with the current time and program, so that the dominant user has the highest priority whenever the corresponding program is present in the media asset data at that time slot. The next time that program is available at that time, the content recommendation unit can push it directly, improving the user experience.
When multiple users are watching together and one of them is emotionally abnormal, people usually watch some of that user's favorite programs to take care of their mood. The content recommendation unit therefore retrieves the final user portrait of the emotionally abnormal user and generates recommended content by combining the recognized emotion with the media asset data, so that programs that user prefers can be pushed accurately.
The foregoing is merely an embodiment of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described here in detail; a person skilled in the art knows the common technical knowledge in the field as of the filing date or priority date, can access the prior art of that time, and has the ability to apply routine experimentation, so such a person can, with the teachings of this application, complete and implement the invention, and certain typical known structures or methods should not become an obstacle to its implementation. It should be noted that a person skilled in the art can make several variations and improvements without departing from the structure of the invention; these should also be regarded as falling within the protection scope of the invention, and they do not affect the effectiveness of the invention or the practicability of the patent. The scope of protection of this application is determined by the content of the claims; the embodiments and other descriptions in the specification may be used to interpret the content of the claims.
Claims (10)
1. A big data-based user portrait method, characterized by comprising the following steps:
acquiring user data, wherein the user data comprises information data, behavior data and other data;
a data processing step, namely transmitting the user data to a data processing platform, preprocessing the user data and then integrating the multi-source data;
a basic portrait step, namely building a basic user portrait according to the preprocessed information data;
a supplementary portrait step, namely obtaining a supplementary user portrait by using a preset model according to the integrated user data;
and a final portrait drawing step, namely performing model fusion on the basic user portrait and the supplementary portrait according to a preset rule to obtain a final user portrait.
2. The big-data based user portrayal method according to claim 1, wherein: in the step of supplementing the portrait, when a preset model is trained, an attention mechanism is introduced for training.
3. The big-data based user portrayal method according to claim 1, wherein: in the final image drawing step, a preset rule is a feature fusion method.
4. The big-data based user portrait method according to claim 1, wherein: there are a plurality of supplementary user portraits.
5. The big-data based user portrayal method according to claim 1, wherein: behavior data includes click, view, pause, and exit.
6. The big-data based user portrait method according to claim 5, wherein: the other data includes viewing date, viewing time, viewing duration, and viewing frequency.
7. The big-data based user portrayal method according to claim 1, wherein: in the data processing step, the preprocessing comprises data cleaning, data integration, data transformation and data reduction.
8. The big-data based user portrayal method according to claim 1, wherein: the user data is multimodal data.
9. The big-data based user portrait method according to claim 1, wherein: the method further comprises a model adjustment step, in which the model that builds the supplementary user portrait is adjusted.
10. The big-data based user portrait method according to claim 9, wherein: in the adjustment step, the model that builds the supplementary user portrait can be adjusted only after identity authentication has passed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011623573.1A CN112784070A (en) | 2020-12-31 | 2020-12-31 | User portrait method based on big data |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112784070A true CN112784070A (en) | 2021-05-11 |
Family
ID=75754345
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011623573.1A Pending CN112784070A (en) | 2020-12-31 | 2020-12-31 | User portrait method based on big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112784070A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115834940A (en) * | 2022-11-14 | 2023-03-21 | 浪潮通信信息系统有限公司 | IPTV/OTT end-to-end data reverse acquisition analysis method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107124653A (en) * | 2017-05-16 | 2017-09-01 | 四川长虹电器股份有限公司 | The construction method of TV user portrait |
CN108021929A (en) * | 2017-11-16 | 2018-05-11 | 华南理工大学 | Mobile terminal electric business user based on big data, which draws a portrait, to establish and analysis method and system |
CN108805383A (en) * | 2018-03-20 | 2018-11-13 | 东华大学 | A kind of user's portrait platform and application for washing shield big data based on clothes |
CN109063059A (en) * | 2018-07-20 | 2018-12-21 | 腾讯科技(深圳)有限公司 | User behaviors log processing method, device and electronic equipment |
CN109684330A (en) * | 2018-12-17 | 2019-04-26 | 深圳市华云中盛科技有限公司 | User's portrait base construction method, device, computer equipment and storage medium |
CN112035742A (en) * | 2020-08-28 | 2020-12-04 | 康键信息技术(深圳)有限公司 | User portrait generation method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11272248B2 (en) | Methods for identifying video segments and displaying contextually targeted content on a connected television | |
WO2022028126A1 (en) | Live streaming processing method and apparatus, and electronic device and computer readable storage medium | |
US10820048B2 (en) | Methods for identifying video segments and displaying contextually targeted content on a connected television | |
JP5795580B2 (en) | Estimating and displaying social interests in time-based media | |
US9560411B2 (en) | Method and apparatus for generating meta data of content | |
US20040073919A1 (en) | Commercial recommender | |
US20100158391A1 (en) | Identification and transfer of a media object segment from one communications network to another | |
CN103686235B (en) | System and method for correlating audio and/or images presented to a user with facial characteristics and expressions of the user | |
JP2006012171A (en) | System and method for using biometrics to manage review | |
US20140172579A1 (en) | Systems and methods for monitoring users viewing media assets | |
WO2011090541A2 (en) | Methods for displaying contextually targeted content on a connected television | |
WO2015196757A1 (en) | Television program recommending method and server | |
WO2017166472A1 (en) | Advertisement data matching method, device, and system | |
CN110415023B (en) | Elevator advertisement recommendation method, device, equipment and storage medium | |
JP2007215046A (en) | Information processor, information processing method, information processing program, and recording medium | |
CN111417024A (en) | Scene recognition-based program recommendation method, system and storage medium | |
CN112784069B (en) | IPTV content intelligent recommendation system and method | |
CN112784070A (en) | User portrait method based on big data | |
JP2018032252A (en) | Viewing user log accumulation system, viewing user log accumulation server, and viewing user log accumulation method | |
KR20200049192A (en) | Providing Method for virtual advertisement and service device supporting the same | |
JP2003319421A (en) | Image management method, device, image management program and recording medium with the program recorded thereon | |
JP6567715B2 (en) | Information processing apparatus, information processing method, and program | |
JP2017011438A (en) | Information processing apparatus, program, information processing system, and receiving apparatus | |
KR20160067685A (en) | Method, server and system for providing video scene collection | |
US11949965B1 (en) | Media system with presentation area data analysis and segment insertion feature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210511 |