WO2020253372A1 - Information push method, device, equipment and storage medium based on big data analysis - Google Patents
Information push method, device, equipment and storage medium based on big data analysis
- Publication number
- WO2020253372A1 (PCT/CN2020/086475)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- user
- facial
- facial feature
- video
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/735—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Definitions
- This application relates to the technical field of big data analysis, and in particular to an information push method, device, equipment and storage medium based on big data analysis.
- this online promotion method can greatly save manpower and material resources and reduce costs for enterprises.
- the inventor realized that existing online promotion is usually large-scale push: to simplify promotion, the same information is pushed to all users at the same time, without considering whether the pushed information suits each user's current actual needs. The accuracy and effectiveness of such information push therefore cannot be guaranteed.
- the main purpose of this application is to provide an information push method, device, equipment, and storage medium based on big data analysis, aiming to solve the technical problems in the prior art that information push accuracy is low and the pushed information does not suit the actual needs of users.
- this application provides an information push method based on big data analysis, and the method includes the following steps:
- an information push model suitable for the user is selected from a pre-built information push model management library, and the information push model is used to push information for the user.
- the step of collecting the facial expressions of users watching the film and television works by time intervals includes:
- the face of the user in each video segment is analyzed to obtain the facial expression of the user watching the film and television work in each time period.
- this application also proposes an information push device based on big data analysis, the device including:
- the collection module is used to collect the streaming media information of the film and television work currently played on the user interface and the facial expressions of the user watching the film and television work in time intervals;
- the analysis module is configured to analyze the streaming media information in each time period to obtain the user's first emotion change trend, and analyze the facial expressions in each time period to obtain the user's second emotion change trend;
- a determining module configured to determine the actual emotion information of the user according to the first emotion change trend and the second emotion change trend;
- the push module is configured to select an information push model suitable for the user from a pre-built information push model management library according to the actual emotional information, and use the information push model to push information for the user.
- this application also proposes an information push device based on big data analysis, the device including: a memory, a processor, and an information push program based on big data analysis that is stored on the memory and can run on the processor, the program being configured to implement the steps of the information push method based on big data analysis as described above.
- this application also proposes a storage medium that stores an information push program based on big data analysis.
- when the information push program based on big data analysis is executed by a processor, the steps of the information push method based on big data analysis as described above are implemented.
- when pushing information for a user, the information push solution based on big data analysis provided by this application predicts the user's first emotion change trend from the film and television works the user watches, determines the user's second emotion change trend from the user's facial expressions while watching those works, then determines the user's actual emotion information from the first and second emotion change trends, and finally, based on the determined actual emotion information, selects an information push model suitable for the user from a pre-built information push model management library and uses that model to push information matching the user's current actual mood. This ensures that the information pushed to the user meets the user's current actual needs, which greatly improves the accuracy and effectiveness of information push.
- FIG. 1 is a schematic structural diagram of an information push device based on big data analysis in a hardware operating environment involved in a solution of an embodiment of the present application;
- FIG. 2 is a schematic flowchart of a first embodiment of an information push method based on big data analysis according to this application;
- FIG. 3 is a schematic flowchart of a second embodiment of an information push method based on big data analysis according to this application;
- FIG. 4 is a schematic diagram of processing video format information in the second embodiment of the information pushing method based on big data analysis of this application;
- FIG. 5 is a schematic diagram of processing video format information in the second embodiment of the information pushing method based on big data analysis of this application;
- FIG. 6 is a schematic diagram of processing video format information in the second embodiment of the information pushing method based on big data analysis of this application;
- FIG. 7 is a structural block diagram of a first embodiment of an information push device based on big data analysis in this application.
- FIG. 1 is a schematic structural diagram of an information pushing device based on big data analysis in a hardware operating environment involved in a solution of an embodiment of the application.
- the information pushing device based on big data analysis may include a processor 1001, such as a central processing unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
- the communication bus 1002 is used to implement connection and communication between these components.
- the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
- the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a wireless fidelity (WI-FI) interface).
- the memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) memory, or a stable non-volatile memory (Non-Volatile Memory, NVM), such as a disk memory.
- the memory 1005 may also be a storage device independent of the foregoing processor 1001.
- the structure shown in FIG. 1 does not constitute a limitation on the information push device based on big data analysis, which may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
- the memory 1005 as a storage medium may include an operating system, a network communication module, a user interface module, and an information push program based on big data analysis.
- the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with users. The processor 1001 and the memory 1005 may be set in the information push device based on big data analysis, and the device calls the information push program based on big data analysis stored in the memory 1005 through the processor 1001 to execute the information push method based on big data analysis provided in the embodiments of this application.
- FIG. 2 is a schematic flowchart of a first embodiment of an information pushing method based on big data analysis of this application.
- the information push method based on big data analysis includes the following steps:
- Step S10 Collect the streaming media information of the movie and TV work currently played on the user interface and the facial expressions of the user watching the movie and TV work in time intervals.
- the execution subject involved in this embodiment may be only a terminal device used to play film and television works, such as a personal computer, a tablet computer, or a smartphone; or it may be such a terminal device operated by the user that completes the method in interaction with a server on which the information push program based on big data analysis given in this embodiment is deployed. The specific arrangement can be set by those skilled in the art as needed, and there is no limitation here.
- the film and television works mentioned in this embodiment mainly include audio and video works such as television, film, and music.
- this embodiment takes the case where the film and television work currently played on the user interface is a movie as an example, and briefly describes the content included in the streaming media information and how each type of content is obtained.
- the collected streaming media information may roughly include the facial expressions, lines, and plot information of the main characters in the movie scene currently being played, which will not be listed one by one here and on which no restrictions are imposed.
- the facial expressions of the main characters in the above-mentioned movie scenes may be specifically extracted from the streaming media information in the video format based on face recognition technology;
- the plot information may be obtained from the corresponding synopsis based on the identification information of the current film and television work, or determined through semantic analysis of the lines;
- the line information may be extracted from the streaming media information in the audio format based on voice recognition technology.
- collecting the user's facial expressions may specifically be implemented by extracting them from video information of the user recorded while watching the film and television work in each time period corresponding to the streaming media information.
- the same method can be used to extract the facial expressions of actors in the streaming media information.
- a specific implementation for extracting facial expressions is given below, with the following general steps (taking the extraction of the user's facial expressions as an example):
- after receiving an instruction triggered by the user on the user interface of the terminal device to play the selected film and television work, the processor of the terminal device obtains the address of the film and television work according to the instruction, and decodes and plays it. At the same time, the camera of the terminal device is turned on, so that video information including the user's face can be collected in real time while the film and television work is playing.
- the reason the video information is segmented according to the streaming media information of each time period, obtaining one video segment per time period of streaming media information, is to ensure that the time axes of the subsequently determined first and second emotion change trends match, thereby ensuring that the actual emotion information of the user determined from the two trends better fits the actual situation.
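- the per-period segmentation described above can be sketched as follows. This is an illustrative sketch only; the function name and the choice of seconds as the unit are our own, not from the source:

```python
# Hypothetical sketch: align the recorded user video with the streaming-media
# time periods by computing matching (start, end) segment boundaries.
def split_into_segments(total_seconds, period_seconds):
    """Return (start, end) pairs covering the recording, one per time period."""
    segments = []
    start = 0.0
    while start < total_seconds:
        end = min(start + period_seconds, total_seconds)
        segments.append((start, end))
        start = end
    return segments
```

Segmenting both the streaming media information and the user recording with the same boundaries keeps the two emotion change trends on a shared time axis.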
- the operation of determining the facial expression of the user can be achieved through the following steps:
- (3-1) Traverse each video segment, and extract the facial feature points of the user from the current video segment traversed according to the facial feature detection model obtained by pre-training.
- the face image of the user may be identified from the current video segment traversed according to the face detection model obtained by pre-training. Then, according to the facial feature detection model obtained by pre-training, the facial feature points of the user, such as the feature points of the eyes, eyebrows, mouth, and jaw, are extracted from the face image.
- the aforementioned face detection model and facial feature detection model may specifically be obtained by training models built on sample data based on a convolutional neural network algorithm.
- the face detection model and face feature detection model obtained by training can more accurately extract the face images and facial feature points in each video segment.
- (3-2) According to each facial feature point, the user's face is divided into facial regions to obtain the facial feature region corresponding to each facial feature point.
- each facial feature point is located in a facial feature region.
- all facial feature points belonging to the same facial part are located in one facial feature area; for example, all facial feature points of the left eyebrow are located in the same facial feature area, and all facial feature points of the right eyebrow are located in another.
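- the region-division step above amounts to grouping the extracted feature points by the facial part they belong to. A minimal sketch (the input format and region names are illustrative assumptions, not from the source):

```python
# Hypothetical sketch: group extracted facial feature points by the facial
# region (eyebrow, eye, mouth, jaw, ...) they belong to, so that each point
# lies in exactly one facial feature region.
def group_by_region(feature_points):
    """feature_points: iterable of (region_name, (x, y)) pairs."""
    regions = {}
    for region, point in feature_points:
        regions.setdefault(region, []).append(point)
    return regions
```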
- (3-3) Based on the optical flow method, determine the velocity vector of the facial feature points in each facial feature region. The velocity vector mentioned here indicates not only the movement speed of the corresponding facial feature point but also its movement direction.
- the velocity vector of the facial feature points in each facial feature region may be determined based on the optical flow method by traversing each facial feature region, detecting the pixel change intensity of the facial feature points in the currently traversed facial feature region between two adjacent image frames, and then inferring the velocity vector of the facial feature points in that region from the pixel change intensity.
- suppose the position coordinates of a certain facial feature point are P(x, y, t) and its intensity is I(x, y, t), where x is the abscissa, y is the ordinate, and t is time, and suppose the point moves by Δx, Δy over the time Δt between two frames.
- assuming the intensity of the point is unchanged between the two frames (the brightness constancy assumption), I(x, y, t) = I(x + Δx, y + Δy, t + Δt). A first-order Taylor expansion yields the optical flow constraint I_x·V_x + I_y·V_y + I_t = 0, where V_x = Δx/Δt and V_y = Δy/Δt are the x and y components of the velocity, or optical flow, of I(x, y, t). Therefore, between two frames separated by Δt, the motion of the above-mentioned feature point is expressed as the two-dimensional velocity vector (V_x, V_y).
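- the constraint I_x·V_x + I_y·V_y + I_t = 0 is one equation in two unknowns, so in practice it is solved over a small patch around the feature point. The sketch below uses the classic Lucas-Kanade least-squares formulation for this; the source names only "the optical flow method", so the specific solver is our assumption:

```python
# Hypothetical sketch of the optical-flow step: estimate the velocity vector
# (Vx, Vy) of a feature point from the brightness-constancy constraint
# Ix*Vx + Iy*Vy + It = 0, solved by least squares over a small window
# (the Lucas-Kanade formulation). Frames are 2D lists of intensities.
def lucas_kanade_point(frame1, frame2, cx, cy, win=2):
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(cy - win, cy + win + 1):
        for x in range(cx - win, cx + win + 1):
            ix = (frame1[y][x + 1] - frame1[y][x - 1]) / 2.0  # spatial gradient
            iy = (frame1[y + 1][x] - frame1[y - 1][x]) / 2.0
            it = frame2[y][x] - frame1[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:
        return (0.0, 0.0)  # aperture problem: flow not recoverable here
    vx = (-sxt * syy + syt * sxy) / det
    vy = (-syt * sxx + sxt * sxy) / det
    return (vx, vy)
```

On a synthetic patch translating by one pixel per frame, this recovers the velocity vector (1, 0).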
- (3-4) Determine the facial expression of the user in the current video segment traversed according to the velocity vector of each facial feature point.
- the facial expression of the user in the video can usually be considered to be sleepiness.
- the reason the streaming media information of the film and television work played on the current user interface and the facial expressions of the user watching it are collected by time intervals is to be able to accurately determine the user's emotion information and thus ensure the accuracy of the information push operation.
- Step S20 based on the big data analysis technology, analyze the streaming media information in each time period to obtain the user's first emotion change trend, and analyze the facial expressions in each time period to obtain the user's second emotion change trend.
- the first emotion change trend mentioned here is essentially the user's current likely emotion estimated from the film and television works the user chooses to watch. For example, if the user has been watching tragedies for a certain period of time, it can be preliminarily determined that the user is in a bad mood and low spirits at this stage.
- the second emotion change trend is determined according to the real facial expressions made by the user while watching the film and television work.
- for example, when the feature points marking the upper eyelid of the inner eye corner move downward so that the upper eyelid droops, and the feature points marking the mouth move outward so that the mouth opens, the user making the current facial expression can usually be considered to be in a state of drowsiness and depression.
- when the feature points marking the lip corners and the cheeks move back and upward so that the lip corners are pulled back and raised, and the feature points marking the mouth move outward so that the mouth opens, the user making the current facial expression can usually be considered to be excited and emotionally aroused.
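- rules of the kind just described can be sketched as a mapping from feature-point velocity vectors to a coarse expression label. The point names, thresholds, and labels below are illustrative assumptions, not values from the source:

```python
# Hypothetical sketch: map the dominant movement directions of a few labelled
# feature points to a coarse expression label. Thresholds are illustrative.
def classify_expression(movements):
    """movements: dict of feature-point name -> (vx, vy) velocity vector."""
    _, lid_vy = movements.get("inner_upper_eyelid", (0.0, 0.0))
    lip_vx, _ = movements.get("lip_corner", (0.0, 0.0))
    if lid_vy > 0.5:    # eyelid drops (image y grows downward): drowsiness
        return "drowsy"
    if lip_vx > 0.5:    # lip corners pulled back and raised: excitement
        return "excited"
    return "neutral"
```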
- the above-mentioned first and second emotion change trends are used to reflect the user's emotion change in the current time period, such as changing from happy to unhappy, changing from unhappy to happy, or remaining in a happy or unhappy state throughout.
- Step S30 Determine actual emotion information of the user according to the first emotion change trend and the second emotion change trend.
- for example, if both the first emotion change trend and the second emotion change trend indicate that the user has been happy in the current time period, it can be determined that the user's actual emotion information is happy; if the first emotion change trend indicates that the user may be in a bad mood during the current time period while the second emotion change trend indicates that the user is happy, the user's actual emotion information can be determined to be unhappy.
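- the combination rule for step S30 can be sketched as a small function. Agreement between the two trends is resolved as in the text; for the disagreement case this sketch follows the worked example above and keeps the first trend's estimate, though other resolution rules are possible:

```python
# Hypothetical sketch of step S30: combine the trend inferred from the work
# being watched (first) with the trend inferred from the user's facial
# expressions (second) into one actual-emotion label.
def actual_emotion(first_trend, second_trend):
    # When both trends agree, that shared emotion is the actual emotion.
    if first_trend == second_trend:
        return first_trend
    # When they disagree, a resolution rule is needed; following the worked
    # example in the text, this sketch keeps the first trend's estimate.
    return first_trend
```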
- Step S40 According to the actual emotional information, select an information push model suitable for the user from a pre-built information push model management database, and use the information push model to push information for the user.
- the information pushed for the user may involve music, film and television works, catering, or entertainment information that can improve the user's mood, or may be film and television works that closely match the currently played work; the specific pushed content can be determined according to the actual emotion information and a pre-built information push model.
- in order that the operation in step S40 of selecting an information push model suitable for the user from a pre-built information push model management library according to the actual emotion information, and then using the selected model to push information for the user, can be performed smoothly, the information push models need to be constructed first.
- the constructed information push models include not only the model selected in the above steps as suitable for the user, but also information push models for other user emotions, which together make up the information push model management library.
- this implementation provides a specific construction process, which is roughly as follows:
- the aforementioned network address may specifically be the Uniform Resource Locator (URL) of the webpage where the training data to be collected is located, or the storage address of the training data in any big data platform or database; these are not listed one by one here, and no restrictions are imposed.
- the web crawler used to obtain the training data can be any one or several of the many kinds of web crawlers, such as general web crawlers, focused web crawlers, incremental web crawlers, and deep web crawlers.
- Those skilled in the art can choose according to their needs, and this application does not impose any restrictions on this.
- the training data can be added to a pre-built buffer pool, such as a Kafka message queue.
- the Kafka message queue is used to cache the training data, avoiding the accumulation of a large amount of training data as much as possible and thereby effectively preventing thread blocking.
- since Kafka is an open-source stream processing platform developed by the Apache Software Foundation and its usage is relatively mature, those skilled in the art can consult the relevant documentation and implement it themselves in specific implementations, which will not be repeated here.
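- the buffering idea can be sketched with a bounded in-memory queue. Here the standard-library `queue.Queue` stands in for the Kafka message queue named in the text; a real deployment would use Kafka producer and consumer clients instead:

```python
# Hypothetical sketch: a bounded buffer pool between the crawler (producer)
# side and the training pipeline (consumer) side. queue.Queue stands in for
# the Kafka message queue; the bound keeps training data from piling up.
import queue

buffer_pool = queue.Queue(maxsize=1000)

def produce(sample):
    # blocks briefly when the pool is full instead of accumulating in memory,
    # which is the thread-blocking-avoidance role the text assigns to Kafka
    buffer_pool.put(sample, timeout=5)

def consume_batch(n):
    batch = []
    while len(batch) < n and not buffer_pool.empty():
        batch.append(buffer_pool.get())
    return batch
```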
- the machine learning algorithm is predetermined as a decision tree algorithm.
- each kind of user emotion information is regarded as a node, and a question is asked at each node;
- based on the answer, the training data is divided into two categories, and questions then continue to be asked in turn, looping until all existing user emotion information has been classified; the learning path for constructing the training model can thereby be obtained.
- the above-mentioned learning objective is used in the subsequent training process to detect whether the training result is sufficiently close to the real data, that is, whether, after the training model completes a round of training and the training data is input into it, the output training result is close to the learning objective.
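- the node-and-question structure described above can be sketched as a tiny hand-built decision tree. The questions, features, and emotion labels are illustrative, not from the source; a trained model would learn the questions from the training data:

```python
# Hypothetical sketch of the decision-tree idea: each node asks one yes/no
# question about a sample and routes it until a leaf emotion label is reached.
class Node:
    def __init__(self, question=None, yes=None, no=None, label=None):
        self.question, self.yes, self.no, self.label = question, yes, no, label

def classify(node, sample):
    while node.label is None:
        node = node.yes if node.question(sample) else node.no
    return node.label

# A tiny hand-built tree over two illustrative features.
tree = Node(
    question=lambda s: s["smiling"],
    yes=Node(label="happy"),
    no=Node(
        question=lambda s: s["eyelids_drooping"],
        yes=Node(label="drowsy"),
        no=Node(label="neutral"),
    ),
)
```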
- training can also be carried out based on data such as gender, age, occupation, platform usage habits, hobbies, family members, community grade, consumption ability, and personality, which are not listed one by one here and on which no restrictions are imposed.
- the information push model can also be adjusted regularly.
- when pushing information for a user, the information push method estimates the user's first emotion change trend from the film and television works the user watches, determines the user's second emotion change trend from the user's facial expressions while watching those works, then determines the user's actual emotion information from the first and second emotion change trends, and finally, based on the determined actual emotion information, selects an information push model suitable for the user from the pre-built information push model management library and uses that model to push information matching the user's current actual mood. This ensures that the information pushed to the user meets the user's current actual needs, which greatly improves the accuracy and effectiveness of information push.
- FIG. 3 is a schematic flowchart of a second embodiment of an information push method based on big data analysis according to this application.
- the information pushed to the user may be text information, voice information, or video information. Therefore, in order to avoid the problem that existing video-format information has a flat picture and cannot attract users during playback, in this embodiment, when it is determined that the information pushed to the user is in a video format, the video-format information is processed so that it is displayed on the user interface with a naked-eye stereoscopic effect, that is, naked-eye 3D (three dimensions), thereby attracting users to view it.
- the information push method based on big data analysis in this embodiment further includes, after step S40:
- Step S50: Monitor whether a viewing instruction triggered by the user to view the video-format information is received.
- if so, step S60 is performed; otherwise, step S50 continues to be performed until it is determined that the user-triggered instruction for viewing the video-format information is received, and then step S60 is performed.
- Step S60: Process the video-format information based on image processing technology, so that it is displayed on the user interface with a naked-eye stereoscopic effect.
- the splitting is performed in units of frames, so that the information is refined as much as possible and the subsequently determined object to be processed is more accurate.
- At least one white dividing line vertically running through the screen is set in each picture to be processed, and each picture to be processed is divided into at least two display areas.
- the width of the white dividing line can be determined according to the size of the user interface used to play the information of the video format, and there is no specific limitation here.
- the white dividing line that is set to run vertically through the screen can divide the picture to be processed into equal parts; that is, the display areas obtained by the division are preferably the same size.
- as for the object to be processed, it is preferable to select an object moving from back to front or from front to back in the information, so that after subsequent processing the displayed picture has the effect of an object moving out of the screen or moving from outside into the screen.
- for ease of understanding, a brief description is given below in conjunction with FIG. 4, FIG. 5, and FIG. 6.
- Figures 4 to 6 show three pictures to be processed corresponding to three consecutive frames. At the center of each of the three pictures, a white dividing line running vertically through the entire screen is set, dividing each picture to be processed into an A display area and a B display area.
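- the dividing-line step can be sketched on a frame represented as a 2D grid of pixel intensities. The representation and the single-centered-line choice are illustrative assumptions:

```python
# Hypothetical sketch: overlay one vertical white dividing line through the
# centre of a frame, splitting it into an A (left) and B (right) display
# area of equal width. frame is a 2D list of grayscale pixel values.
WHITE = 255

def add_dividing_line(frame, line_width=1):
    width = len(frame[0])
    start = (width - line_width) // 2  # centre the line horizontally
    for row in frame:
        for x in range(start, start + line_width):
            row[x] = WHITE
    return frame
```

The line width would in practice be chosen according to the size of the user interface, as noted above.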
- the processor directly activates the parallax barrier, so that the light beams of the picture to be displayed pass through the alternating light and dark stripes at different positions of the parallax barrier. This produces a tiny parallax for the user, and by the parallax principle the user's left and right eyes see different pixel content, thereby achieving the naked-eye 3D effect.
- with the information push method based on big data analysis provided in this embodiment, when the information pushed to the user is in a video format and the user triggers a viewing instruction for it, the video-format information is processed based on image processing technology so that it can be displayed on the user interface with a naked-eye stereoscopic effect for the user to view. Because the user does not need to wear 3D glasses when viewing naked-eye stereoscopic pictures, and the stereoscopic pictures are more vivid than two-dimensional ones, the user experience during viewing is better, which can attract users to watch and further improve the browsing rate of the information and the efficiency of information push.
- an embodiment of the present application also proposes a storage medium that stores an information push program based on big data analysis; when the information push program based on big data analysis is executed by a processor, the steps of the information push method based on big data analysis described above are implemented.
- the computer-readable storage medium may be non-volatile or volatile.
- Fig. 7 is a structural block diagram of a first embodiment of an information push device based on big data analysis in this application.
- the information pushing device based on big data analysis proposed in the embodiment of the present application includes: an acquisition module 7001, an analysis module 7002, a determination module 7003, and a pushing module 7004.
- the collection module 7001 is used to collect the streaming media information of the film and television works currently played on the user interface and the facial expressions of the users watching the film and television works in time periods;
- the analysis module 7002 is used to analyze, based on big data analysis technology, the streaming media information in each time period to obtain the user's first emotion change trend, and to analyze the facial expressions in each time period to obtain the user's second emotion change trend;
- the determining module 7003 is configured to determine the actual emotion information of the user according to the first emotion change trend and the second emotion change trend;
- the push module 7004 is configured to select an information push model suitable for the user from a pre-built information push model management library according to the actual emotion information, and to use the information push model to push information for the user.
- this embodiment provides a way to collect facial expressions of users watching the film and television works in time intervals, which is roughly as follows:
- (3-1) Traverse each video segment, and extract the facial feature points of the user from the currently traversed video segment according to the facial feature detection model obtained by pre-training;
- (3-2) According to each facial feature point, perform facial area division on the user's face to obtain a facial feature area corresponding to each facial feature point;
- (3-3) Based on the optical flow method, infer the velocity vector of the facial feature points in each facial feature area;
- (3-4) Determine the facial expression of the user in the currently traversed video segment according to the velocity vector of each facial feature point.
- in order to ensure that the push module 7004 can select an information push model suitable for the user from a pre-built information push model management library according to the actual emotion information, and then use the selected information push model to push information for the user, the information push device based on big data analysis provided in this embodiment may further include a construction module.
- the construction module is used to construct the information push models before the push module 7004 performs the above operations.
- the information push models constructed by the construction module include not only the information push model selected by the push module 7004 as suitable for the user, but also information push models for other user emotions, which together make up the information push model management library.
- this embodiment provides a specific construction process, which is roughly as follows: receive a data collection instruction and extract from it the network address of the training data to be collected; configure a web crawler according to the network address and use it to obtain the training data from the corresponding web page; plan a learning path according to the training data and a predetermined machine learning algorithm; construct a training model from the learning path and the training data; determine a learning goal according to the business requirement of the preset information push model; iteratively train the training model with the machine learning algorithm; and determine that the information push model is obtained when the degree of matching between the training result and the learning goal exceeds a preset threshold.
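The construction process can be sketched as two helpers: one that pulls the training-data address out of the data collection instruction, and one that trains until the result matches the learning goal. The semicolon-separated `url=` instruction format and the callback-based training loop are illustrative assumptions; the patent does not fix an instruction syntax or a particular algorithm:

```python
def extract_network_address(instruction):
    """Extract the network address of the training data to be collected
    from a semicolon-separated data collection instruction."""
    for field in instruction.split(";"):
        if field.startswith("url="):
            return field[len("url="):]
    raise ValueError("instruction carries no network address")

def train_until_goal(train_step, match_goal, threshold=0.9, max_iters=100):
    """Iteratively train the model; stop once the degree of matching between
    the training result and the learning goal exceeds the preset threshold."""
    model = None
    for _ in range(max_iters):
        model, result = train_step(model)
        if match_goal(result) > threshold:
            break
    return model
```

The extracted address would then be handed to the web crawler configuration, and `train_step` would wrap one iteration of the chosen machine learning algorithm over the crawled training data.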
- the information push device provided in this embodiment, when pushing information to a user, estimates the user's first emotional change trend from the film and television work the user is watching, determines the user's second emotional change trend from the user's facial expressions while watching, and then determines the user's actual emotion information from the first emotional change trend and the second emotional change trend; finally, based on the determined actual emotion information, it selects an information push model suitable for the user from the pre-built information push model management library and uses that model to push information matching the user's current actual mood, thereby ensuring that the information pushed to the user suits the user's current actual needs and greatly improving the accuracy and effectiveness of information push.
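One way to picture the fusion of the two trends into the actual emotion information is weighted voting over the per-period emotion labels. The weights (0.4 for the content-derived trend, 0.6 for the facial trend) are purely illustrative assumptions — the patent leaves the combination rule open:

```python
from collections import Counter

def determine_actual_emotion(first_trend, second_trend, w_first=0.4, w_second=0.6):
    """Fuse the per-period emotions inferred from the streamed content
    (first trend) and from facial expressions (second trend) by weighted
    voting, returning the dominant emotion as the actual emotion."""
    scores = Counter()
    for emotion in first_trend:
        scores[emotion] += w_first
    for emotion in second_trend:
        scores[emotion] += w_second
    return scores.most_common(1)[0][0]
```

Under these weights, a melancholy film (first trend mostly "sad") watched by a visibly amused user (second trend mostly "happy") resolves to "happy", because the facial evidence is weighted more heavily than the content.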
- the information pushed to the user may be text information, voice information, or video information. To avoid the problem that existing video-format information presents a flat picture and cannot attract users during playback, in this embodiment, when the information pushed to the user is determined to be in video format, the video-format information is processed and then displayed on the user interface with a naked-eye stereoscopic effect, that is, naked-eye 3D (three dimensions), thereby attracting users to view it.
- the information pushing device based on big data analysis further includes: a monitoring module and a processing module.
- the monitoring module is configured to monitor whether a viewing instruction triggered by the user to view the information in the video format is received.
- the processing module is configured to, upon receiving a viewing instruction triggered by the user to view the information in the video format, process the information in the video format based on image processing technology, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
- the processing is roughly as follows: split the information in the video format frame by frame to obtain at least one picture to be processed; at preset intervals, set in each picture to be processed at least one white dividing line running vertically through the picture, dividing each picture to be processed into at least two display areas; analyze each picture to be processed and determine the objects whose position changes in the video-format information as objects to be processed; based on image processing technology, restore the portions of the objects to be processed that are occluded by the white dividing lines, in the order in which the objects' positions change, to obtain pictures to be displayed; and display the pictures to be displayed in their playback order in the video-format information, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
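The white-line frame processing can be sketched on a grey-scale frame stored as nested lists: draw the white dividing lines at a preset column interval, then restore the moving object's original pixels wherever a line occludes it, so the object appears to pass in front of the lines. Representing the frame as plain lists and the object as a boolean mask is an illustrative simplification of the image processing involved:

```python
WHITE = 255  # grey level of the dividing lines

def add_dividing_lines(frame, interval):
    """Set every `interval`-th pixel column to white, splitting the picture
    into display areas separated by vertical dividing lines."""
    out = [row[:] for row in frame]
    for x in range(interval, len(frame[0]), interval):
        for row in out:
            row[x] = WHITE
    return out

def restore_occluded_object(lined_frame, original_frame, object_mask):
    """Put the moving object's original pixels back on top of the white
    lines (the mask marks the object), creating the in-front-of-the-line
    illusion that reads as naked-eye 3D."""
    out = [row[:] for row in lined_frame]
    for y, mask_row in enumerate(object_mask):
        for x, covers in enumerate(mask_row):
            if covers:
                out[y][x] = original_frame[y][x]
    return out
```

Run per frame in playback order, only the pixels where the moving object overlaps a line are restored, so the static background stays behind the lines while the object seems to cross in front of them.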
- with the information push device provided in this embodiment, when the information pushed to the user is in video format, if the user triggers a viewing instruction to view the video-format information, that information is processed based on image processing technology so that it can be displayed on the user interface with a naked-eye stereoscopic effect for the user to view. Because the user does not need to wear 3D glasses to view naked-eye stereoscopic images, and stereoscopic images are more vivid than flat two-dimensional ones, the viewing experience is better, which can attract users to watch and further improve the browsing rate of the information and the efficiency of information push.
- the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes over the existing technology, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium (such as Read-Only Memory (ROM)/RAM, a magnetic disk, or an optical disk) and includes several instructions to make a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) execute the methods described in the embodiments of the present application.
Abstract
Description
Claims (20)
- An information push method based on big data analysis, wherein the method comprises: collecting, in time periods, streaming media information of a film and television work currently played on a user interface and facial expressions of a user watching the film and television work; based on big data analysis technology, analyzing the streaming media information of each period to obtain a first emotional change trend of the user, and analyzing the facial expressions of each period to obtain a second emotional change trend of the user; determining actual emotion information of the user according to the first emotional change trend and the second emotional change trend; and according to the actual emotion information, selecting an information push model suitable for the user from a pre-built information push model management library, and using the information push model to push information to the user.
- The method according to claim 1, wherein the step of collecting, in time periods, the facial expressions of the user watching the film and television work comprises: during playback of the film and television work, collecting video information containing the user's face; intercepting, from the video information, video segments corresponding to the streaming media information of each period; and analyzing the user's face in each video segment to obtain the facial expressions of the user watching the film and television work in each period.
- The method according to claim 2, wherein the step of analyzing the user's face in each video segment to obtain the facial expressions of the user watching the film and television work in each period comprises: traversing the video segments, and extracting facial feature points of the user from the currently traversed video segment according to a facial feature detection model obtained by pre-training; dividing the user's face into facial regions according to the facial feature points, to obtain a facial feature region corresponding to each facial feature point; determining, based on an optical flow method, a velocity vector of the facial feature points in each facial region, the velocity vector representing motion speed information and motion direction information of each facial feature point; determining the facial expression of the user in the currently traversed video segment according to the velocity vector of each facial feature point; and arranging the facial expressions of the user in the video segments according to the playback order of the streaming media information of each period in the film and television work, to obtain the facial expressions of the user watching the film and television work in each period.
- The method according to claim 3, wherein the step of determining the velocity vector of the facial feature points in each facial region comprises: traversing the facial feature regions, and detecting a pixel change intensity of the facial feature points in the currently traversed facial feature region between two adjacent image frames; and inferring the velocity vector of the facial feature points in the current facial feature region according to the pixel change intensity.
- The method according to any one of claims 1 to 4, wherein before the step of selecting, according to the actual emotion information, an information push model suitable for the user from a pre-built information push model management library, the method further comprises: constructing the information push model; wherein the step of constructing the information push model comprises: receiving a data collection instruction, and extracting from the data collection instruction a network address of training data to be collected; configuring a web crawler according to the network address, and using the web crawler to obtain the training data from the web page corresponding to the network address; planning a learning path according to the training data and a predetermined machine learning algorithm; constructing a training model according to the learning path and the training data; determining a learning goal according to a business requirement corresponding to the preset information push model; iteratively training the training model using the machine learning algorithm; and determining that the information push model is obtained when the degree of matching between the training result and the learning goal is greater than a preset threshold.
- The method according to any one of claims 1 to 4, wherein the information is in a video format; and after the step of using the information push model to push information to the user, the method further comprises: if a viewing instruction triggered by the user to view the information in the video format is received, processing the information in the video format based on image processing technology, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
- The method according to claim 6, wherein the step of processing the information in the video format based on image processing technology so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect comprises: splitting the information in the video format frame by frame to obtain at least one picture to be processed; setting, at preset intervals, at least one white dividing line running vertically through the picture in each picture to be processed, dividing each picture to be processed into at least two display areas; analyzing each picture to be processed, and determining objects whose position changes in the information in the video format as objects to be processed; based on image processing technology, restoring the portions of the objects to be processed that are occluded by the white dividing lines, in the order in which the positions of the objects to be processed change in the information in the video format, to obtain pictures to be displayed; and displaying the pictures to be displayed according to their playback order in the information in the video format, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
- An information push device based on big data analysis, wherein the device comprises: a memory, a processor, and an information push program based on big data analysis that is stored in the memory and executable on the processor, the information push program being configured to be executed by the processor to implement the following steps: collecting, in time periods, streaming media information of a film and television work currently played on a user interface and facial expressions of a user watching the film and television work; based on big data analysis technology, analyzing the streaming media information of each period to obtain a first emotional change trend of the user, and analyzing the facial expressions of each period to obtain a second emotional change trend of the user; determining actual emotion information of the user according to the first emotional change trend and the second emotional change trend; and according to the actual emotion information, selecting an information push model suitable for the user from a pre-built information push model management library, and using the information push model to push information to the user.
- The information push device based on big data analysis according to claim 8, wherein the information push program is executed by the processor to implement the step of collecting, in time periods, the facial expressions of the user watching the film and television work, comprising: during playback of the film and television work, collecting video information containing the user's face; intercepting, from the video information, video segments corresponding to the streaming media information of each period; and analyzing the user's face in each video segment to obtain the facial expressions of the user watching the film and television work in each period.
- The information push device based on big data analysis according to claim 9, wherein the information push program is executed by the processor to implement the step of analyzing the user's face in each video segment to obtain the facial expressions of the user watching the film and television work in each period, comprising: traversing the video segments, and extracting facial feature points of the user from the currently traversed video segment according to a facial feature detection model obtained by pre-training; dividing the user's face into facial regions according to the facial feature points, to obtain a facial feature region corresponding to each facial feature point; determining, based on an optical flow method, a velocity vector of the facial feature points in each facial region, the velocity vector representing motion speed information and motion direction information of each facial feature point; determining the facial expression of the user in the currently traversed video segment according to the velocity vector of each facial feature point; and arranging the facial expressions of the user in the video segments according to the playback order of the streaming media information of each period in the film and television work, to obtain the facial expressions of the user watching the film and television work in each period.
- The information push device based on big data analysis according to claim 10, wherein the information push program is executed by the processor to implement the step of determining the velocity vector of the facial feature points in each facial region, comprising: traversing the facial feature regions, and detecting a pixel change intensity of the facial feature points in the currently traversed facial feature region between two adjacent image frames; and inferring the velocity vector of the facial feature points in the current facial feature region according to the pixel change intensity.
- The information push device based on big data analysis according to any one of claims 8 to 11, wherein before the step of selecting, according to the actual emotion information, an information push model suitable for the user from a pre-built information push model management library, the information push program is further executed by the processor to implement: constructing the information push model; wherein the step of constructing the information push model comprises: receiving a data collection instruction, and extracting from the data collection instruction a network address of training data to be collected; configuring a web crawler according to the network address, and using the web crawler to obtain the training data from the web page corresponding to the network address; planning a learning path according to the training data and a predetermined machine learning algorithm; constructing a training model according to the learning path and the training data; determining a learning goal according to a business requirement corresponding to the preset information push model; iteratively training the training model using the machine learning algorithm; and determining that the information push model is obtained when the degree of matching between the training result and the learning goal is greater than a preset threshold.
- The information push device based on big data analysis according to any one of claims 8 to 11, wherein the information is in a video format; and after the step of using the information push model to push information to the user, the information push program is further executed by the processor to implement: if a viewing instruction triggered by the user to view the information in the video format is received, processing the information in the video format based on image processing technology, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
- The information push device based on big data analysis according to claim 13, wherein the information push program is executed by the processor to implement the step of processing the information in the video format based on image processing technology so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect, comprising: splitting the information in the video format frame by frame to obtain at least one picture to be processed; setting, at preset intervals, at least one white dividing line running vertically through the picture in each picture to be processed, dividing each picture to be processed into at least two display areas; analyzing each picture to be processed, and determining objects whose position changes in the information in the video format as objects to be processed; based on image processing technology, restoring the portions of the objects to be processed that are occluded by the white dividing lines, in the order in which the positions of the objects to be processed change in the information in the video format, to obtain pictures to be displayed; and displaying the pictures to be displayed according to their playback order in the information in the video format, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
- A storage medium, wherein an information push program based on big data analysis is stored on the storage medium, and when executed by a processor, the information push program implements the following steps: collecting, in time periods, streaming media information of a film and television work currently played on a user interface and facial expressions of a user watching the film and television work; based on big data analysis technology, analyzing the streaming media information of each period to obtain a first emotional change trend of the user, and analyzing the facial expressions of each period to obtain a second emotional change trend of the user; determining actual emotion information of the user according to the first emotional change trend and the second emotional change trend; and according to the actual emotion information, selecting an information push model suitable for the user from a pre-built information push model management library, and using the information push model to push information to the user.
- The storage medium according to claim 15, wherein the information push program is executed by the processor to implement the step of collecting, in time periods, the facial expressions of the user watching the film and television work, comprising: during playback of the film and television work, collecting video information containing the user's face; intercepting, from the video information, video segments corresponding to the streaming media information of each period; and analyzing the user's face in each video segment to obtain the facial expressions of the user watching the film and television work in each period.
- The storage medium according to claim 16, wherein the information push program is executed by the processor to implement the step of analyzing the user's face in each video segment to obtain the facial expressions of the user watching the film and television work in each period, comprising: traversing the video segments, and extracting facial feature points of the user from the currently traversed video segment according to a facial feature detection model obtained by pre-training; dividing the user's face into facial regions according to the facial feature points, to obtain a facial feature region corresponding to each facial feature point; determining, based on an optical flow method, a velocity vector of the facial feature points in each facial region, the velocity vector representing motion speed information and motion direction information of each facial feature point; determining the facial expression of the user in the currently traversed video segment according to the velocity vector of each facial feature point; and arranging the facial expressions of the user in the video segments according to the playback order of the streaming media information of each period in the film and television work, to obtain the facial expressions of the user watching the film and television work in each period.
- The storage medium according to claim 17, wherein the information push program is executed by the processor to implement the step of determining the velocity vector of the facial feature points in each facial region, comprising: traversing the facial feature regions, and detecting a pixel change intensity of the facial feature points in the currently traversed facial feature region between two adjacent image frames; and inferring the velocity vector of the facial feature points in the current facial feature region according to the pixel change intensity.
- The storage medium according to any one of claims 15 to 18, wherein before the step of selecting, according to the actual emotion information, an information push model suitable for the user from a pre-built information push model management library, the information push program is further executed by the processor to implement: constructing the information push model; wherein the step of constructing the information push model comprises: receiving a data collection instruction, and extracting from the data collection instruction a network address of training data to be collected; configuring a web crawler according to the network address, and using the web crawler to obtain the training data from the web page corresponding to the network address; planning a learning path according to the training data and a predetermined machine learning algorithm; constructing a training model according to the learning path and the training data; determining a learning goal according to a business requirement corresponding to the preset information push model; iteratively training the training model using the machine learning algorithm; and determining that the information push model is obtained when the degree of matching between the training result and the learning goal is greater than a preset threshold.
- The storage medium according to any one of claims 15 to 18, wherein the information is in a video format; and after the step of using the information push model to push information to the user, the information push program is further executed by the processor to implement: if a viewing instruction triggered by the user to view the information in the video format is received, processing the information in the video format based on image processing technology, so that the information in the video format is displayed on the user interface with a naked-eye stereoscopic effect.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910539938.3 | 2019-06-19 | ||
CN201910539938.3A CN110390048A (zh) | 2019-06-19 | 2019-06-19 | 基于大数据分析的信息推送方法、装置、设备及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253372A1 true WO2020253372A1 (zh) | 2020-12-24 |
Family
ID=68285621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/086475 WO2020253372A1 (zh) | 2019-06-19 | 2020-04-23 | 基于大数据分析的信息推送方法、装置、设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110390048A (zh) |
WO (1) | WO2020253372A1 (zh) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112637363A (zh) * | 2021-01-05 | 2021-04-09 | 上海臻琴文化传播有限公司 | 一种信息流推送处理方法、系统、装置和存储介质 |
CN112948622A (zh) * | 2021-03-16 | 2021-06-11 | 深圳市火乐科技发展有限公司 | 一种展示内容的控制方法及装置 |
CN114491730A (zh) * | 2021-12-23 | 2022-05-13 | 中国铁道科学研究院集团有限公司 | 一种高速铁路路基结构动力安定分析迭代方法及装置 |
CN116825365A (zh) * | 2023-08-30 | 2023-09-29 | 安徽爱学堂教育科技有限公司 | 基于多角度微表情的心理健康分析方法 |
CN116955830A (zh) * | 2023-08-25 | 2023-10-27 | 成都中康大成环保科技有限公司 | 基于吸烟舱的信息推送方法、计算机设备与可读存储介质 |
CN117593058A (zh) * | 2023-12-05 | 2024-02-23 | 北京鸿途信达科技股份有限公司 | 基于情绪识别的广告匹配系统 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110390048A (zh) * | 2019-06-19 | 2019-10-29 | 深圳壹账通智能科技有限公司 | 基于大数据分析的信息推送方法、装置、设备及存储介质 |
CN111428662A (zh) * | 2020-03-30 | 2020-07-17 | 齐鲁工业大学 | 基于人群属性的广告播放变化方法及系统 |
CN111726691A (zh) * | 2020-07-03 | 2020-09-29 | 北京字节跳动网络技术有限公司 | 视频推荐方法、装置、电子设备及计算机可读存储介质 |
CN113724838B (zh) * | 2020-08-19 | 2023-06-20 | 麦乐峰(厦门)智能科技有限公司 | 基于大数据的情绪鉴定系统 |
CN113285867B (zh) * | 2021-04-28 | 2023-08-22 | 青岛海尔科技有限公司 | 用于消息提醒的方法、系统、装置及设备 |
CN113312343A (zh) * | 2021-06-11 | 2021-08-27 | 北京思特奇信息技术股份有限公司 | 一种基于网络爬虫工具的商机管理方法和系统 |
CN115953724B (zh) * | 2023-03-14 | 2023-06-16 | 深圳市银弹科技有限公司 | 一种用户数据分析以及管理方法、装置、设备及存储介质 |
CN116662603B (zh) * | 2023-07-28 | 2023-10-20 | 江西云眼视界科技股份有限公司 | 基于kafka的时间轴管控方法、系统、电子设备及存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107958433A (zh) * | 2017-12-11 | 2018-04-24 | 吉林大学 | 一种基于人工智能的在线教育人机交互方法与系统 |
CN108427500A (zh) * | 2018-02-23 | 2018-08-21 | 广东欧珀移动通信有限公司 | 锁屏杂志推送方法及相关产品 |
WO2018188567A1 (zh) * | 2017-04-13 | 2018-10-18 | 腾讯科技(深圳)有限公司 | 服务器信息推送方法、终端信息发送方法、装置、系统以及存储介质 |
CN109102336A (zh) * | 2018-08-09 | 2018-12-28 | 安徽爱依特科技有限公司 | 基于图像分析的机器人广告推送方法及其系统 |
CN109614849A (zh) * | 2018-10-25 | 2019-04-12 | 深圳壹账通智能科技有限公司 | 基于生物识别的远程教学方法、装置、设备及存储介质 |
CN109672935A (zh) * | 2017-10-13 | 2019-04-23 | 富泰华工业(深圳)有限公司 | 基于用户情绪的视频推送系统及方法 |
CN109767290A (zh) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | 产品推送方法、装置、计算机设备和存储介质 |
CN110390048A (zh) * | 2019-06-19 | 2019-10-29 | 深圳壹账通智能科技有限公司 | 基于大数据分析的信息推送方法、装置、设备及存储介质 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10682086B2 (en) * | 2017-09-12 | 2020-06-16 | AebeZe Labs | Delivery of a digital therapeutic method and system |
2019
- 2019-06-19 CN CN201910539938.3A patent/CN110390048A/zh active Pending
2020
- 2020-04-23 WO PCT/CN2020/086475 patent/WO2020253372A1/zh active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018188567A1 (zh) * | 2017-04-13 | 2018-10-18 | 腾讯科技(深圳)有限公司 | 服务器信息推送方法、终端信息发送方法、装置、系统以及存储介质 |
CN109672935A (zh) * | 2017-10-13 | 2019-04-23 | 富泰华工业(深圳)有限公司 | 基于用户情绪的视频推送系统及方法 |
CN107958433A (zh) * | 2017-12-11 | 2018-04-24 | 吉林大学 | 一种基于人工智能的在线教育人机交互方法与系统 |
CN108427500A (zh) * | 2018-02-23 | 2018-08-21 | 广东欧珀移动通信有限公司 | 锁屏杂志推送方法及相关产品 |
CN109102336A (zh) * | 2018-08-09 | 2018-12-28 | 安徽爱依特科技有限公司 | 基于图像分析的机器人广告推送方法及其系统 |
CN109614849A (zh) * | 2018-10-25 | 2019-04-12 | 深圳壹账通智能科技有限公司 | 基于生物识别的远程教学方法、装置、设备及存储介质 |
CN109767290A (zh) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | 产品推送方法、装置、计算机设备和存储介质 |
CN110390048A (zh) * | 2019-06-19 | 2019-10-29 | 深圳壹账通智能科技有限公司 | 基于大数据分析的信息推送方法、装置、设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
ANONYMOUS: "Why add two more white lines to these pictures immediately have the "naked eye 3D" effect", 16 February 2016 (2016-02-16), pages 1 - 9, XP055767221, Retrieved from the Internet <URL:https://www.sohu.com/a/59052194_323700> * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112637363A (zh) * | 2021-01-05 | 2021-04-09 | 上海臻琴文化传播有限公司 | 一种信息流推送处理方法、系统、装置和存储介质 |
CN112948622A (zh) * | 2021-03-16 | 2021-06-11 | 深圳市火乐科技发展有限公司 | 一种展示内容的控制方法及装置 |
CN114491730A (zh) * | 2021-12-23 | 2022-05-13 | 中国铁道科学研究院集团有限公司 | 一种高速铁路路基结构动力安定分析迭代方法及装置 |
CN116955830A (zh) * | 2023-08-25 | 2023-10-27 | 成都中康大成环保科技有限公司 | 基于吸烟舱的信息推送方法、计算机设备与可读存储介质 |
CN116955830B (zh) * | 2023-08-25 | 2024-01-16 | 成都中康大成环保科技有限公司 | 基于吸烟舱的信息推送方法、计算机设备与可读存储介质 |
CN116825365A (zh) * | 2023-08-30 | 2023-09-29 | 安徽爱学堂教育科技有限公司 | 基于多角度微表情的心理健康分析方法 |
CN116825365B (zh) * | 2023-08-30 | 2023-11-28 | 安徽爱学堂教育科技有限公司 | 基于多角度微表情的心理健康分析方法 |
CN117593058A (zh) * | 2023-12-05 | 2024-02-23 | 北京鸿途信达科技股份有限公司 | 基于情绪识别的广告匹配系统 |
Also Published As
Publication number | Publication date |
---|---|
CN110390048A (zh) | 2019-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020253372A1 (zh) | 基于大数据分析的信息推送方法、装置、设备及存储介质 | |
US11290775B2 (en) | Computerized system and method for automatically detecting and rendering highlights from streaming videos | |
US10979761B2 (en) | Intelligent video interaction method | |
CN103760968B (zh) | 数字标牌显示内容选择方法和装置 | |
JP6267861B2 (ja) | 対話型広告のための使用測定技法およびシステム | |
US10998003B2 (en) | Computerized system and method for automatically extracting GIFs from videos | |
US9554184B2 (en) | Method and apparatus for increasing user engagement with video advertisements and content by summarization | |
US20170065888A1 (en) | Identifying And Extracting Video Game Highlights | |
US20210334325A1 (en) | Method for displaying information, electronic device and system | |
US10939165B2 (en) | Facilitating television based interaction with social networking tools | |
CN111258435B (zh) | 多媒体资源的评论方法、装置、电子设备及存储介质 | |
CN108012162A (zh) | 内容推荐方法及装置 | |
TW201404127A (zh) | 多媒體評價系統、其裝置以及其方法 | |
CN103365936A (zh) | 视频推荐系统及其方法 | |
CN109635680A (zh) | 多任务属性识别方法、装置、电子设备及存储介质 | |
US10846517B1 (en) | Content modification via emotion detection | |
CN113766330A (zh) | 基于视频生成推荐信息的方法和装置 | |
US11277583B2 (en) | Personalized automatic video cropping | |
US10853417B2 (en) | Generating a platform-based representative image for a digital video | |
WO2020222157A1 (en) | Method and system for tracking, analyzing and reacting to user behaviour in digital and physical spaces | |
US11468675B1 (en) | Techniques for identifying objects from video content | |
CN114584824A (zh) | 数据处理方法、系统、电子设备、服务端及客户端设备 | |
US11615158B2 (en) | System and method for un-biasing user personalizations and recommendations | |
US20220327134A1 (en) | Method and system for determining rank positions of content elements by a ranking system | |
CN118042186A (zh) | 提供视频封面的方法、装置、电子设备及计算机可读介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20827171 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20827171 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 29/03/2022) |
|