WO2018157746A1 - Method and apparatus for recommending video data - Google Patents

Method and apparatus for recommending video data

Info

Publication number
WO2018157746A1
Authority
WO
WIPO (PCT)
Prior art keywords
video data
feature information
target
quality
quality feature
Prior art date
Application number
PCT/CN2018/076784
Other languages
English (en)
Chinese (zh)
Inventor
张亚楠
王瑜
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2018157746A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668 Learning process for intelligent management, e.g. learning user preferences for recommending movies, for recommending content, e.g. movies

Definitions

  • The present invention relates to the field of data processing technologies, and in particular to a method and device for recommending video data, a method and device for generating a video data detection model, and a method and device for identifying video data.
  • E-commerce websites have begun to use video content for shopping guidance and marketing: text information is entered according to operational needs, suitable video frames are selected from a video library, and, according to the text semantics, those frames are assembled into a video of an appropriate scene, which is then recommended to the target user.
  • In view of the above problems, embodiments of the present application provide a video data recommendation method, a video data recommendation device, a video data detection model generation method, a video data detection model generation device, a video data identification method, and a corresponding video data identification device, which overcome the above problems or at least partially solve them.
  • the present application discloses a method for recommending video data, including:
  • the target video data is recommended to the user.
  • the preset video data detection model is generated by:
  • separately extracting quality feature information of a plurality of sample video data, where the plurality of sample video data includes a plurality of positive sample video data and negative sample video data; and performing training by using the quality feature information of the plurality of positive sample video data and negative sample video data to generate the video data detection model.
  • the quality feature information includes image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of image frames, image frame wavelet transform feature information, and/or image rotation operator feature information.
  • the step of separately extracting quality feature information of the plurality of sample video data includes:
  • the pixel information is separately subjected to convolution operation and pooling processing to obtain image pixel feature information.
  • the step of separately extracting quality feature information of the plurality of sample video data includes:
  • the number and frequency of occurrence of the objects in two adjacent frames of images are respectively determined to obtain continuous frame image object migration feature information.
  • the step of separately extracting quality feature information of the plurality of sample video data includes:
  • the geometric parameters of the shape features of the motion objects in the adjacent two frames of images are respectively determined to obtain continuous frame image motion feature information.
  • the step of separately extracting quality feature information of the plurality of sample video data includes:
  • the amplitude difference and the phase difference of the adjacent two frames of images are respectively determined to obtain different frequency domain feature information of the image frame.
  • the step of separately extracting quality feature information of the plurality of sample video data includes:
  • the change values of the wavelet coefficients of the adjacent two frames of images are respectively determined to obtain image frame wavelet transform feature information.
  • the step of separately extracting quality feature information of the plurality of sample video data includes:
  • the change values of the rotation operators of the adjacent two frames of images are respectively determined to obtain image rotation operator feature information.
  • the step of training by using the quality feature information of the plurality of positive sample video data and the negative sample video data to generate the video data detection model includes:
  • the target quality feature information is used to train the neural network model to generate a video data detection model.
  • the step of identifying target quality feature information from the normalized quality feature information includes:
  • quality feature information whose information entropy exceeds a first preset threshold is identified as the target quality feature information.
  • it also includes:
  • the plurality of users are clustered into a plurality of user groups according to the attribute information, and the user groups have corresponding user labels.
  • the step of identifying the quality feature information by using a preset video data detection model to obtain target video data includes:
  • the video data whose quality score exceeds the second preset threshold is extracted as target video data.
  • the step of recommending the target video data to a user includes:
  • the target video data is recommended to the target user group.
  • the target video data has a corresponding video tag
  • the step of determining a target user group among the multiple user groups includes: determining a user group whose user tag matches the video tag of the target video data as the target user group.
  • the present application discloses a method for generating a video data detection model, including:
  • separately extracting quality feature information of a plurality of sample video data, where the plurality of sample video data includes a plurality of positive sample video data and negative sample video data; and performing training by using the quality feature information of the plurality of positive sample video data and negative sample video data to generate the video data detection model.
  • the quality feature information includes image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of image frames, image frame wavelet transform feature information, and/or image rotation operator feature information.
  • the present application discloses a method for identifying video data, including:
  • a recommendation device for video data including:
  • An obtaining module configured to acquire one or more video data to be detected
  • An extraction module configured to separately extract quality feature information of each video data to be detected
  • An identification module configured to identify the quality feature information by using a preset video data detection model to obtain target video data
  • a recommendation module for recommending the target video data to a user.
  • the preset video data detection model is generated by calling the following module:
  • a quality feature information extraction module configured to separately extract quality feature information of a plurality of sample video data, where the plurality of sample video data includes a plurality of positive sample video data and negative sample video data;
  • a video data detection model generating module configured to perform training by using the quality feature information of the plurality of positive sample video data and the negative sample video data to generate a video data detection model.
  • the quality feature information includes image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of image frames, image frame wavelet transform feature information, and/or image rotation operator feature information.
  • the quality feature information extraction module includes:
  • a pixel information extraction submodule configured to extract pixel information of each frame image of each sample video data
  • the pixel information processing sub-module is configured to perform convolution operation and pooling processing on the pixel information to obtain image pixel feature information.
  • the quality feature information extraction module further includes:
  • an object recognition sub-module configured to identify objects in each frame image of each sample video data;
  • an object processing sub-module configured to respectively determine the number and frequency of occurrence of the objects in two adjacent frames of images to obtain continuous frame image object migration feature information.
  • the quality feature information extraction module further includes:
  • a motion object recognition submodule configured to identify a shape feature of the motion object in each frame image of each sample video data
  • a motion object processing sub-module configured to respectively determine geometric parameters of the shape features of the motion objects in two adjacent frames of images to obtain continuous frame image motion feature information.
  • the quality feature information extraction module further includes:
  • an amplitude and phase determination sub-module configured to determine the amplitude and phase of each frame image of each sample video data;
  • the amplitude and phase processing sub-module is configured to respectively determine amplitude difference and phase difference of adjacent two frames of images to obtain different frequency domain feature information of the image frame.
  • the quality feature information extraction module further includes:
  • a wavelet coefficient determining submodule for determining a wavelet coefficient of each frame image of each sample video data
  • the wavelet coefficient processing sub-module is configured to respectively determine the variation values of the wavelet coefficients of the adjacent two frames of images to obtain image frame wavelet transform feature information.
  • the quality feature information extraction module further includes:
  • a rotation operator determining sub-module for determining a rotation operator of each frame image of each sample video data
  • the rotation operator processing sub-module is configured to respectively determine a variation value of a rotation operator of the adjacent two frames of images to obtain image rotation operator feature information.
  • the video data detection model generating module includes:
  • a normalization processing sub-module configured to normalize the quality feature information of the plurality of positive sample video data and negative sample video data to obtain normalized quality feature information;
  • a target quality feature information identifying submodule configured to identify target quality feature information from the normalized quality feature information
  • the video data detection model generation submodule is configured to perform neural network model training by using the target quality feature information, and generate a video data detection model.
  • the target quality feature information identifying submodule includes:
  • An information entropy determining unit configured to determine an information entropy of the normalized quality feature information
  • the target quality feature information identifying unit is configured to identify quality feature information whose information entropy exceeds the first preset threshold as the target quality feature information.
  • generating the preset video data detection model further invokes the following modules:
  • An attribute information obtaining module configured to acquire attribute information of multiple users
  • the user group clustering module is configured to cluster the plurality of users into a plurality of user groups according to the attribute information, where the user group has a corresponding user label.
  • the identifying module includes:
  • a quality feature information identifying sub-module configured to identify, by using a preset video data detection model, the quality feature information of the one or more video data to be detected, respectively, to obtain quality scores of the one or more video data to be detected;
  • the target video data extraction sub-module is configured to extract video data whose quality score exceeds a second preset threshold as target video data.
  • the recommendation module includes:
  • a target user group determining submodule configured to determine a target user group among the plurality of user groups
  • the target video data recommendation submodule is configured to recommend the target video data to the target user group.
  • the target video data has a corresponding video tag
  • the target user group determining submodule includes:
  • the target user group determining unit is configured to determine a user group whose user tag matches the video tag of the target video data as the target user group.
  • a device for generating a video data detection model including:
  • a quality feature information extraction module configured to separately extract quality feature information of a plurality of sample video data, where the plurality of sample video data includes a plurality of positive sample video data and negative sample video data;
  • a video data detection model generating module configured to perform training by using the quality feature information of the plurality of positive sample video data and the negative sample video data to generate a video data detection model.
  • the quality feature information includes image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of image frames, image frame wavelet transform feature information, and/or image rotation operator feature information.
  • an apparatus for identifying video data including:
  • An obtaining module configured to acquire one or more video data to be detected
  • a sending module configured to send the one or more video data to be detected to a server, where the server is configured to separately identify the one or more video data to be detected to obtain a recognition result, and the recognition result includes one or more candidate video data;
  • a receiving module configured to receive the one or more candidate video data returned by the server
  • a determining module configured to determine target video data in the one or more candidate video data
  • a presentation module for presenting the target video data.
  • the embodiments of the present application include the following advantages:
  • One or more video data to be detected are acquired, the quality feature information of each video data to be detected is separately extracted, the quality feature information is then identified by using a preset video data detection model to obtain target video data, and the target video data is recommended to the user. High-quality video data can thus be quickly screened out by using a deep learning model.
  • The embodiments of the present application solve the problem that the prior art can only rely on manual identification to recommend video segments to users, improving both the recognition efficiency of video data and the accuracy of recommendation.
  • FIG. 1 is a flow chart showing the steps of Embodiment 1 of a method for recommending video data according to the present application;
  • FIG. 2 is a flow chart of steps of a second embodiment of a method for recommending video data according to the present application
  • FIG. 3 is a schematic block diagram of a method for recommending video data according to the present application.
  • FIG. 4 is a flow chart showing the steps of an embodiment of a method for generating a video data detection model according to the present application
  • FIG. 5 is a flow chart showing the steps of an embodiment of a method for identifying video data according to the present application
  • FIG. 6 is a structural block diagram of an embodiment of a device for recommending video data according to the present application.
  • FIG. 7 is a structural block diagram of an embodiment of a device for generating a video data detection model according to the present application.
  • FIG. 8 is a structural block diagram of an embodiment of an apparatus for identifying video data according to the present application.
  • Referring to FIG. 1, a flow chart of a first embodiment of a method for recommending video data according to the present application is shown. Specifically, the method may include the following steps:
  • Step 101 Acquire one or more video data to be detected
  • the video data to be detected may be an off-the-shelf video segment obtained from various sources, or may be a video segment synthesized in real time by extracting multiple video frames from a video library according to a certain rule.
  • This embodiment of the present application does not limit the specific source or type of the video data.
  • Step 102 Extract quality characteristic information of each video data to be detected, respectively.
  • the quality feature information of the video data may be feature information for identifying the quality of the video data, for example, image pixels of the video data, content displayed by the image, and the like. By identifying the quality characteristic information of the video data, it is possible to check the fluency, consistency, and the like of the video clip.
  • the type of quality feature information to be extracted and the manner of extraction may be determined by a person skilled in the art according to actual needs, which is not limited by the embodiments of the present application.
  • Step 103 Identify the quality feature information by using a preset video data detection model to obtain target video data.
  • the preset video data detection model may be generated by training a plurality of sample video data in the training sample set, so that each quality feature information of the video data to be detected may be identified.
  • the plurality of sample video data in the training sample set may include a plurality of positive sample video data and a plurality of negative sample video data.
  • the positive sample video data may be video segments with better video quality, for example, video clips with good fluency and coherence and a relatively uniform overall style between video frames.
  • the positive sample video data can be obtained by manual marking or web crawling. In contrast to the positive sample video data,
  • the negative sample video data are video segments with poor fluency, coherence, and overall style consistency between video frames.
  • such negative sample video data can be obtained by randomly synthesizing multiple video frames.
  • the source and the identification manner of the positive sample video data and the negative sample video data are not limited in the embodiments of the present application.
  • the quality feature information of the positive sample video data and the negative sample video data may be respectively extracted, and model training is performed to generate a video data detection model. After the quality feature information of the video data to be detected is extracted, the video data detection model is used to identify that quality feature information to obtain target video data.
  • the target video data may be a video clip of good quality obtained after being identified by the video data detection model.
  • Step 104 recommending the target video data to a user.
  • to recommend the target video data to the user, the target video segment may be played in the user interface, or pushed to the user.
  • the specific manner of recommending the target video data is not limited in this embodiment of the present application.
  • In the embodiment of the present application, one or more video data to be detected are acquired, the quality feature information of each video data to be detected is separately extracted, the quality feature information is then identified by using a preset video data detection model to obtain target video data, and the target video data is recommended to the user.
  • the deep learning model in the embodiment of the present application can quickly screen out high-quality video data, solving the problem that the prior art can only rely on manual identification to recommend video clips to users, and improving the recognition efficiency of video data and the accuracy of recommendations.
  • the method may include the following steps:
  • Step 201 Extract quality feature information of a plurality of sample video data, where the plurality of sample video data includes a plurality of positive sample video data and negative sample video data;
  • FIG. 3 is a functional block diagram of a method for recommending video data of the present application.
  • the embodiment of the present application performs feature extraction on the training sample set and then performs deep learning modeling; the trained model is then used to evaluate the video data to be detected and output corresponding quality scores. At the same time, user attribute information is integrated during the modeling process to cluster users into groups, so that videos can be recommended to user communities.
  • the positive sample video data may be video segments with better video quality, for example, video segments with good fluency and coherence and a uniform overall style between video frames.
  • such positive sample video data can usually be obtained by manual marking: an operator checks the fluency and coherence of a video segment and the overall style between its video frames, and marks the video clips with better fluency and coherence and a more consistent overall style as positive sample video data.
  • positive sample video data can also be obtained through web crawling, that is, by capturing high-quality videos with high click-through rates and many likes from video websites.
  • the negative sample video data are video segments with poor fluency, coherence, and overall style consistency between video frames.
  • such negative sample video data can be obtained by randomly synthesizing multiple video frames. For example, scattered video frame segments can be randomly extracted from multiple categories (such as travel, religion, and electronic products) and then randomly combined and spliced. The resulting segments contain a large number of scene inconsistencies and semantic incoherences, so such spliced video segments can be used as negative sample video data.
  • the obtained positive sample video data and negative sample video data can then be used as a training sample set for subsequent model training.
  • the quality feature information of the plurality of sample video data in the training sample set may be separately extracted first.
  • the quality feature information may include image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of the image frame, and image frame wavelet transform. Feature information, and/or image rotation operator feature information.
  • the following describes a method for extracting the above six kinds of feature information one by one.
  • for image pixel feature information, the pixel information of each frame image of each sample video data may be extracted, and the pixel information is then separately subjected to a convolution operation and pooling processing to obtain the image pixel feature information.
  • each frame intercepted from a video segment is an image, so the pixel information in each frame image can be extracted separately as a feature set to be processed; a convolution operation is then performed on the pixel information in the feature set, and the feature set obtained after the convolution operation is further subjected to pooling processing (max-pooling), thereby obtaining the image pixel feature information.
  • through the convolution operation and pooling processing, the most significant description of the pixel information can be obtained: the resulting features not only have reduced dimensionality but can also express the original semantic meaning of the image, as illustrated in the sketch below.
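  • As an illustrative sketch of this step (an assumption, not the implementation disclosed in this application), the following Python code uses PyTorch to apply a convolution operation followed by max-pooling to per-frame pixel tensors; the kernel bank, channel counts, and pooling size are chosen for illustration only.

```python
import torch
import torch.nn.functional as F

def pixel_features(frames: torch.Tensor) -> torch.Tensor:
    """Convolve and max-pool per-frame pixel data.

    frames: (num_frames, 3, H, W) float tensor of RGB pixel values.
    Returns a reduced-dimension feature map per frame.
    """
    # Illustrative 3x3 kernel bank with 8 output channels; in a trained
    # model these weights would be learned rather than random.
    weight = torch.randn(8, 3, 3, 3)
    conv = F.conv2d(frames, weight, padding=1)  # convolution operation
    pooled = F.max_pool2d(conv, kernel_size=2)  # max-pooling halves H and W
    return pooled
```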
  • for continuous frame image object migration feature information, the objects in each frame image of each sample video data may be identified, and the number and frequency of occurrence of the objects in two adjacent frames of images may then be respectively determined to obtain the continuous frame image object migration feature information.
  • in a specific implementation, each frame image may be analyzed separately, the objects in each frame image are identified and extracted, and the frames are then sorted in chronological order, so that the changes of the objects across adjacent frames can be determined.
  • some adjacent image frames may be selected according to actual needs; the number of adjacent image frames selected is not limited in this embodiment of the present application (see the sketch below).
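  • A minimal sketch of this idea (the per-frame detector and feature encoding are assumptions, as no detector is named in this application): given per-frame object label lists in time order, the change in object counts and the sets of appearing and vanishing objects between adjacent frames serve as migration features.

```python
from collections import Counter

def object_migration_features(frame_objects):
    """frame_objects: list of per-frame object label lists in time order,
    e.g. [["person", "car"], ["person"], ...] from some detector (assumed).
    Returns, for each adjacent frame pair, the change in object count and
    the labels that appear or vanish between the two frames."""
    features = []
    for prev, curr in zip(frame_objects, frame_objects[1:]):
        prev_counts, curr_counts = Counter(prev), Counter(curr)
        count_change = sum(curr_counts.values()) - sum(prev_counts.values())
        appeared = set(curr_counts) - set(prev_counts)  # entering the scene
        vanished = set(prev_counts) - set(curr_counts)  # leaving the scene
        features.append((count_change, appeared, vanished))
    return features
```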
  • for continuous frame image motion feature information, the embodiment of the present application can identify the shape features of the motion objects in each frame image of each sample video data, and then respectively determine the geometric parameters of the shape features of the motion objects in two adjacent frames of images to obtain the continuous frame image motion feature information.
  • in a specific implementation, the motion object in each frame image can be identified separately and its geometric boundary determined; the geometric boundary of the motion object in each frame image is then compared with that in the previous frame image, the geometric parameters of the shape features of the motion object are calculated according to a geometric affine transformation, and these geometric parameters are used as the continuous frame image motion feature information, as sketched below.
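  • A hedged sketch of the affine comparison (OpenCV's cv2.estimateAffine2D is one way to fit a geometric affine transform; the application does not name a library, and the boundary-point extraction is assumed to happen upstream):

```python
import cv2
import numpy as np

def motion_affine_params(prev_pts: np.ndarray, curr_pts: np.ndarray):
    """Estimate the 2x3 affine transform mapping a motion object's boundary
    points in one frame (prev_pts, Nx2) to the next frame (curr_pts, Nx2).
    The matrix entries (rotation/scale/shear plus translation) serve as the
    geometric parameters of the shape change between adjacent frames."""
    matrix, _inliers = cv2.estimateAffine2D(
        prev_pts.astype(np.float32), curr_pts.astype(np.float32))
    return matrix
```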
  • for different frequency domain feature information of image frames, the amplitude and phase of each frame image of each sample video data may be determined, and the amplitude difference and phase difference of two adjacent frames of images may then be respectively determined to obtain the different frequency domain feature information of the image frames.
  • in a specific implementation, a Fourier transform may first be performed on each frame image and spectral features extracted; the amplitude and phase features of each of a plurality of different spectral components are extracted and used as the feature set of each frame image. The amplitude difference and phase difference of adjacent frames are then calculated to obtain the different frequency domain feature information of the image frames (see the sketch below).
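  • For illustration, a minimal NumPy sketch of the amplitude and phase comparison between two adjacent grayscale frames (applying a 2-D FFT over whole frames is an assumption):

```python
import numpy as np

def frequency_domain_features(prev_frame: np.ndarray, curr_frame: np.ndarray):
    """Amplitude and phase differences of two adjacent grayscale frames."""
    f_prev = np.fft.fft2(prev_frame)
    f_curr = np.fft.fft2(curr_frame)
    amplitude_diff = np.abs(f_curr) - np.abs(f_prev)  # amplitude difference
    phase_diff = np.angle(f_curr) - np.angle(f_prev)  # phase difference
    return amplitude_diff, phase_diff
```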
  • for image frame wavelet transform feature information, the embodiment of the present application may determine the wavelet coefficients of each frame image of each sample video data, and then respectively determine the change values of the wavelet coefficients of two adjacent frames to obtain the image frame wavelet transform feature information.
  • in a specific implementation, wavelet transform processing may be performed on each frame image to obtain the corresponding wavelet coefficients; the frames are then sorted in time series, the wavelet coefficients of adjacent frames are compared, and the change in the wavelet coefficients is extracted as the wavelet transform feature information, as sketched below.
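  • A minimal sketch using PyWavelets (the Haar wavelet is an assumption; the application does not name a wavelet basis):

```python
import pywt

def wavelet_features(prev_frame, curr_frame):
    """Change in 2-D wavelet coefficients between two adjacent grayscale
    frames (NumPy arrays). pywt.dwt2 returns (cA, (cH, cV, cD))."""
    prev_cA, prev_details = pywt.dwt2(prev_frame, "haar")
    curr_cA, curr_details = pywt.dwt2(curr_frame, "haar")
    approx_diff = curr_cA - prev_cA  # change in approximation coefficients
    detail_diffs = [c - p for c, p in zip(curr_details, prev_details)]
    return approx_diff, detail_diffs
```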
  • for image rotation operator feature information, the rotation operator of each frame image of each sample video data may first be determined, and the change values of the rotation operators of two adjacent frame images are then respectively determined to obtain the image rotation operator feature information.
  • in a specific implementation, the rotation operator of each frame image may first be calculated; the frames are then sorted in time series, and the change value of the rotation operator between two adjacent frames of images is determined to obtain the image rotation operator feature information.
  • the rotation operator of each frame image may be calculated using the SIFT (Scale-Invariant Feature Transform) algorithm, an algorithm for detecting local features. It obtains features by finding the feature points of a picture together with their scale and orientation descriptors, and performs image feature point matching; its essence is to find key points (feature points) in different scale spaces and calculate their orientations (see the sketch below).
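  • A hedged sketch of SIFT-based orientation comparison between adjacent frames (cv2.SIFT_create requires opencv-python >= 4.4; the brute-force matching strategy and the use of keypoint angles as the rotation operator change are illustrative assumptions):

```python
import cv2

def sift_rotation_features(prev_frame, curr_frame):
    """Match SIFT keypoints across two adjacent grayscale frames and
    return the orientation change (in degrees) of each matched pair."""
    sift = cv2.SIFT_create()
    kp_prev, des_prev = sift.detectAndCompute(prev_frame, None)
    kp_curr, des_curr = sift.detectAndCompute(curr_frame, None)
    if des_prev is None or des_curr is None:  # no keypoints detected
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_prev, des_curr)
    return [kp_curr[m.trainIdx].angle - kp_prev[m.queryIdx].angle
            for m in matches]
```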
  • Step 202 Perform training by using the quality feature information of the plurality of positive sample video data and negative sample video data to generate a video data detection model.
  • the quality feature information may be used for model training to generate a video data detection model.
  • in a specific implementation, the quality feature information of the plurality of positive sample video data and negative sample video data may be normalized to obtain normalized quality feature information; the normalization may include filling in missing values of the quality feature information. Target quality feature information is then identified from the normalized quality feature information and used for neural network model training to generate the video data detection model.
  • identifying the target quality feature information means screening out highly discriminative feature information.
  • in a specific implementation, the information entropy of the normalized quality feature information may first be determined. The larger the information entropy, the richer the information it carries and the more important the feature, and thus the more it should be retained. Therefore, quality feature information whose information entropy exceeds a first preset threshold can be identified as the target quality feature information (a minimal sketch follows below).
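  • A minimal NumPy sketch of this screening step, assuming min-max normalization, mean imputation for missing values, and a histogram estimate of information entropy (none of these specifics are fixed by the application):

```python
import numpy as np

def select_by_entropy(features: np.ndarray, threshold: float, bins: int = 16):
    """features: (num_samples, num_features) matrix of quality features.
    Normalizes each column to [0, 1], fills missing values, and keeps
    columns whose information entropy exceeds the threshold."""
    col_means = np.nanmean(features, axis=0)
    features = np.where(np.isnan(features), col_means, features)  # fill gaps
    mins, maxs = features.min(axis=0), features.max(axis=0)
    normalized = (features - mins) / np.where(maxs > mins, maxs - mins, 1)
    keep = []
    for j in range(normalized.shape[1]):
        hist, _ = np.histogram(normalized[:, j], bins=bins)
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # information entropy
        if entropy > threshold:  # richer information: retain the feature
            keep.append(j)
    return normalized[:, keep], keep
```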
  • in the embodiment of the present application, personalized feature information of users may also be integrated, so that when the video data to be detected is identified, the evaluation of the video data can be combined with user attributes to improve the relevance and effectiveness of the recommended video data.
  • in a specific implementation, attribute information of multiple users may be acquired, and the users are then clustered into multiple user groups according to the attribute information, each user group having a corresponding user label, so that the users' attribute information can be effectively integrated when the video data in the training sample set is used for model training (see the sketch below).
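  • As an illustrative sketch (k-means and the attribute encoding are assumptions; the application does not specify a clustering algorithm), users can be grouped from numeric attribute vectors as follows:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(user_attributes: np.ndarray, n_groups: int = 5):
    """user_attributes: (num_users, num_attributes) numeric matrix, e.g.
    encoded age, region, and purchase-category frequencies (encoding assumed).
    Returns a group index per user; each index plays the role of the
    user group's user label mentioned above."""
    kmeans = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    labels = kmeans.fit_predict(user_attributes)
    return labels, kmeans.cluster_centers_
```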
  • Step 203 Acquire one or more video data to be detected.
  • the video data to be detected may be a video segment synthesized in real time by extracting a plurality of video frames according to a certain rule in a video library.
  • for example, when an e-commerce website uses video content for shopping guidance and marketing, multiple video frames matching input text content may be extracted from a massive video library, and these video frames are then combined into video segments according to a certain rule.
  • the video data to be detected may also be determined by other methods in the art.
  • alternatively, the video data to be detected may be an off-the-shelf video segment obtained from various sources, which is not limited in this embodiment of the present application.
  • Step 204 Extract quality feature information of each video data to be detected, respectively.
  • the quality feature information of the video data to be detected may also include image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of the image frame, and image frame wavelet. Transforming feature information, and/or image rotation operator feature information.
  • For the method of extracting the foregoing quality feature information, refer to step 201; it is not described again in this step.
  • Step 205 Identify, by using a preset video data detection model, the quality feature information of the one or more video data to be detected to obtain a quality score of the one or more video data to be detected.
  • in a specific implementation, the quality feature information may be identified by using the trained video detection model, each video data to be detected is scored based on the recognition result, and a corresponding quality score is output.
  • Step 206 Extract video data whose quality score exceeds a second preset threshold as target video data.
  • video data whose quality score exceeds the second preset threshold can be extracted as target video data.
  • a person skilled in the art can determine the size of the second preset threshold according to actual needs, which is not limited by the embodiment of the present application.
  • alternatively, the video data with the highest quality score can be directly selected as the target video data, which is not limited in this embodiment of the present application; both options appear in the sketch below.
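  • A minimal sketch of the selection logic in steps 205 and 206 (the `score_quality` interface is a hypothetical stand-in for the trained detection model's forward pass):

```python
def select_target_videos(videos, model, score_threshold: float):
    """Score each candidate's quality features with the trained detection
    model and keep those whose quality score exceeds the threshold; if none
    qualify, fall back to the single best-scoring video."""
    scored = [(v, model.score_quality(v.quality_features)) for v in videos]
    targets = [v for v, score in scored if score > score_threshold]
    if not targets and scored:
        targets = [max(scored, key=lambda pair: pair[1])[0]]
    return targets
```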
  • Step 207 Determine a target user group among the plurality of user groups
  • the identified target video data may include a corresponding video tag to reflect the classification or other information of the video data.
  • the target user group for the target video data may be identified by comparing the video tag with the user tags of the user groups; for example, a user group whose user tag matches the video tag of the target video data may be determined as the target user group (a minimal matching sketch follows after this list).
  • a person skilled in the art may also determine the target user group in other manners, which is not limited by the embodiment of the present application.
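  • A minimal sketch of the tag-matching rule described above (treating "matches" as sharing at least one tag, which is an assumption):

```python
def match_target_groups(video_tags, user_groups):
    """video_tags: iterable of tags on the target video data.
    user_groups: mapping of group id -> set of user tags.
    Returns the ids of user groups sharing at least one tag with the video."""
    video_tags = set(video_tags)
    return [gid for gid, tags in user_groups.items() if video_tags & tags]
```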
  • Step 208 recommend the target video data to the target user group.
  • the target video data may be recommended to the target user group.
  • the video clip can be recommended to a potential consumer group, improving the user service experience and improving the user conversion rate.
  • Referring to FIG. 4, a flow chart of the steps of a method for generating a video data detection model of the present application is shown, which may specifically include the following steps:
  • Step 401 Extract quality feature information of a plurality of sample video data, where the plurality of sample video data includes a plurality of forward sample video data and negative direction sample video data;
  • Step 402 Perform training by using quality feature information of the plurality of forward sample video data and negative sample video data to generate a video data detection model.
  • the quality feature information may include image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of the image frame, and image frame wavelet transform feature information. And/or image rotation operator feature information.
  • the method for generating the video data detection model in steps 401 to 402 of this embodiment is similar to steps 201 to 202 in the second embodiment of the video data recommendation method, and reference may be made between them.
  • Referring to FIG. 5, a flow chart of the steps of an embodiment of a method for identifying video data according to the present application is shown. Specifically, the method may include the following steps:
  • Step 501 Acquire one or more video data to be detected.
  • a user interface may be provided.
  • an interactive interface is displayed on the display screen of the terminal, and the user may submit a detection request for one or more video data through the interaction interface.
  • the video data may be an off-the-shelf video segment obtained from various channels, or may be a video segment synthesized in real time by extracting a plurality of video frames from the video library according to a certain rule.
  • the specific source and type of the video data are not limited in the embodiment of the present application.
  • Step 502 Send the one or more video data to be detected to a server, where the server is configured to separately identify the one or more video data to be detected to obtain a recognition result, where the identification result includes One or more candidate video data;
  • the terminal may send one or more video data to be detected to the server, and the server completes the identification of the video data to obtain a corresponding recognition result.
  • the identification result may include one or more candidate video data, and each candidate video data includes a corresponding quality score.
  • the process of identifying the one or more video data to be detected by the server is similar to steps 201 to 205 in the foregoing embodiment, and reference may be made thereto.
  • Step 503 Receive the one or more candidate video data returned by the server.
  • the server may return one or more candidate video data included in the identification result to the terminal.
  • Step 504 Determine target video data in the one or more candidate video data.
  • since each candidate video data has a corresponding quality score, the target video data may be determined according to the quality score.
  • the higher the quality score, the better the quality of the corresponding video data can be considered to be. Therefore, the video data with the highest quality score can be used as the target video data; alternatively, candidate video data whose quality scores exceed a certain threshold can be taken as a screening range, and the target video data is then determined from the candidates within that range according to the actual requirements of the service. The specific manner of determining the target video data is not limited in the embodiment of the present application. Of course, there may be more than one target video data; this application does not limit this.
  • the target video data may also be determined by the terminal according to information input by the user, for example, selected by the user from the multiple candidate video data, which is not limited in this embodiment of the present application.
  • Step 505 presenting the target video data.
  • in a specific implementation, the terminal may display the target video data on the interaction interface; for example, specific information of the target video data may be displayed, or the target video data may be played directly, which is not limited in this embodiment of the present application.
  • in the embodiment of the present application, the user can directly submit an identification request for video data through the interaction interface, and the server identifies the video data targeted by the request, so that the user can complete the detection of video data according to actual needs, making it more convenient for the user to judge the quality of the video data.
  • Referring to FIG. 6, a structural block diagram of a device for recommending video data of the present application is shown, which may specifically include the following modules:
  • the obtaining module 601 is configured to acquire one or more video data to be detected
  • the extracting module 602 is configured to separately extract quality feature information of each video data to be detected
  • the identification module 603 is configured to identify the quality feature information by using a preset video data detection model to obtain target video data.
  • the recommendation module 604 is configured to recommend the target video data to the user.
  • the preset video data detection model may be generated by calling the following module:
  • a quality feature information extraction module configured to separately extract quality feature information of a plurality of sample video data, where the plurality of sample video data may include a plurality of positive sample video data and negative sample video data;
  • a video data detection model generating module configured to perform training by using the quality feature information of the plurality of positive sample video data and the negative sample video data to generate a video data detection model.
  • in the embodiment of the present application, the quality feature information may include image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of image frames, image frame wavelet transform feature information, and/or image rotation operator feature information.
  • the quality feature information extraction module may specifically include the following submodules:
  • a pixel information extraction submodule configured to extract pixel information of each frame image of each sample video data
  • the pixel information processing sub-module is configured to perform convolution operation and pooling processing on the pixel information to obtain image pixel feature information.
  • the quality feature information extraction module may further include the following sub-modules:
  • an object recognition sub-module configured to identify objects in each frame image of each sample video data;
  • an object processing sub-module configured to respectively determine the number and frequency of occurrence of the objects in two adjacent frames of images to obtain continuous frame image object migration feature information.
  • the quality feature information extraction module may further include the following sub-modules:
  • a motion object recognition submodule configured to identify a shape feature of the motion object in each frame image of each sample video data
  • a motion object processing sub-module configured to respectively determine geometric parameters of the shape features of the motion objects in two adjacent frames of images to obtain continuous frame image motion feature information.
  • the quality feature information extraction module may further include the following sub-modules:
  • an amplitude and phase determination sub-module configured to determine the amplitude and phase of each frame image of each sample video data;
  • the amplitude and phase processing sub-module is configured to respectively determine the amplitude difference and the phase difference of the adjacent two frames of images to obtain different frequency domain characteristic information of the image frame.
  • the quality feature information extraction module may further include the following sub-modules:
  • a wavelet coefficient determining submodule for determining a wavelet coefficient of each frame image of each sample video data
  • the wavelet coefficient processing sub-module is configured to respectively determine the variation values of the wavelet coefficients of the adjacent two frames of images to obtain image frame wavelet transform feature information.
  • the quality feature information extraction module may further include the following sub-modules:
  • a rotation operator determining sub-module for determining a rotation operator of each frame image of each sample video data
  • the rotation operator processing sub-module is configured to respectively determine a variation value of a rotation operator of the adjacent two frames of images to obtain image rotation operator feature information.
  • the video data detection model generating module may specifically include the following submodules:
  • a normalization processing sub-module configured to normalize the quality feature information of the plurality of positive sample video data and negative sample video data to obtain normalized quality feature information;
  • a target quality feature information identifying submodule configured to identify target quality feature information from the normalized quality feature information
  • the video data detection model generation submodule is configured to perform neural network model training by using the target quality feature information, and generate a video data detection model.
  • the target quality feature information identifying submodule may specifically include the following units:
  • An information entropy determining unit configured to determine an information entropy of the normalized quality feature information
  • the target quality feature information identifying unit is configured to identify quality feature information whose information entropy exceeds the first preset threshold as the target quality feature information.
  • generating the preset video data detection model may also invoke the following modules:
  • An attribute information obtaining module configured to acquire attribute information of multiple users
  • the user group clustering module is configured to cluster the plurality of users into a plurality of user groups according to the attribute information, where the user group has a corresponding user label.
  • the identification module 603 may specifically include the following sub-modules:
  • a quality feature information identifying sub-module configured to identify, by using a preset video data detection model, the quality feature information of the one or more video data to be detected, respectively, to obtain quality scores of the one or more video data to be detected;
  • the target video data extraction sub-module is configured to extract video data whose quality score exceeds a second preset threshold as target video data.
  • the recommendation module 604 may specifically include the following submodules:
  • a target user group determining submodule configured to determine a target user group among the plurality of user groups
  • the target video data recommendation submodule is configured to recommend the target video data to the target user group.
  • the target video data may have a corresponding video label
  • the target user group determining sub-module may specifically include the following units:
  • the target user group determining unit is configured to determine a user group whose user tag matches the video tag of the target video data as the target user group.
  • Referring to FIG. 7, a structural block diagram of an embodiment of a device for generating a video data detection model of the present application is shown, which may specifically include the following modules:
  • the quality feature information extraction module 701 is configured to separately extract quality feature information of a plurality of sample video data, where the plurality of sample video data may include a plurality of positive sample video data and negative sample video data;
  • the video data detection model generating module 702 is configured to perform training by using the quality feature information of the plurality of positive sample video data and the negative sample video data to generate a video data detection model.
  • in the embodiment of the present application, the quality feature information may include image pixel feature information, continuous frame image object migration feature information, continuous frame image motion feature information, different frequency domain feature information of image frames, image frame wavelet transform feature information, and/or image rotation operator feature information.
  • Referring to FIG. 8, a structural block diagram of an embodiment of an apparatus for identifying video data according to the present application is shown, which may specifically include the following modules:
  • the obtaining module 801 is configured to acquire one or more video data to be detected
  • a sending module 802 configured to send the one or more video data to be detected to a server, where the server is configured to separately identify the one or more video data to be detected to obtain a recognition result, where
  • the recognition result may include one or more candidate video data;
  • the receiving module 803 is configured to receive the one or more candidate video data returned by the server;
  • a determining module 804 configured to determine target video data in the one or more candidate video data
  • a presentation module 805 is configured to present the target video data.
  • For the device embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the description of the method embodiments.
  • embodiments of the embodiments of the present application can be provided as a method, apparatus, or computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • computer-usable storage media including but not limited to disk storage, CD-ROM, optical storage, etc.
  • the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • RAM random access memory
  • ROM read only memory
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology. The information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory. (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disk read only memory (CD-ROM), digital versatile disk (DVD) or other optical storage, Magnetic tape cartridges, magnetic tape storage or other magnetic storage devices or any other non-transportable media can be used to store information that can be accessed by a computing device.
  • as defined herein, computer readable media does not include transitory computer readable media, such as modulated data signals and carrier waves.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • The above has described in detail a method for recommending video data, a device for recommending video data, a method for generating a video data detection model, a device for generating a video data detection model, a method for identifying video data, and a device for identifying video data provided by the present application.
  • Specific examples are used herein to explain the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for a person of ordinary skill in the art, changes may be made to the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a recommendation method and apparatus for video data. The recommendation method comprises: acquiring one or more items of video data to be detected; separately extracting quality feature information for each item of video data to be detected; recognizing the quality feature information using a predefined video data detection model, so as to obtain target video data; and recommending the target video data to a user. In the embodiments of the present invention, high-quality video data can be rapidly screened using a deep learning model. The embodiments of the present invention solve the prior-art problem whereby a video clip can be recommended to a user only on the basis of manual recognition, thereby improving the recognition efficiency for video data and the accuracy rate of recommendation.
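To make the screening pipeline concrete, here is a minimal Python sketch of the flow summarized above: extract a quality feature vector per candidate video, score it with a pretrained detection model, keep the candidates that qualify as target video data, and recommend the best of them. All names (`Video`, `QualityModel`, `recommend`), the example features, and the linear scoring used as a stand-in for the deep learning model are illustrative assumptions, not an API disclosed by the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Video:
    video_id: str
    # Extracted quality feature information, e.g. [sharpness, stability, exposure].
    features: List[float]

class QualityModel:
    """Stand-in for the pretrained deep video data detection model."""

    def __init__(self, weights: List[float], threshold: float = 0.5):
        self.weights = weights
        self.threshold = threshold

    def score(self, features: List[float]) -> float:
        # Placeholder linear scoring; a real model would run a deep network forward pass.
        return sum(w * f for w, f in zip(self.weights, features))

def recommend(videos: List[Video], model: QualityModel, top_k: int = 3) -> List[str]:
    """Return IDs of the highest-scoring videos that clear the quality threshold."""
    scored = [(model.score(v.features), v.video_id) for v in videos]
    # Keep only videos recognized as high quality: the "target video data".
    target = [(s, vid) for s, vid in scored if s >= model.threshold]
    target.sort(reverse=True)  # best quality first
    return [vid for _, vid in target[:top_k]]

if __name__ == "__main__":
    candidates = [
        Video("v1", [0.9, 0.8, 0.7]),
        Video("v2", [0.2, 0.3, 0.1]),  # low quality, filtered out
        Video("v3", [0.7, 0.9, 0.6]),
    ]
    model = QualityModel(weights=[0.4, 0.4, 0.2])
    print(recommend(candidates, model))  # ['v1', 'v3']
```

In a real deployment, `score` would be the forward pass of the trained deep network over the extracted quality feature information, but the filter-then-rank structure of the recommendation step stays the same.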
PCT/CN2018/076784 2017-02-28 2018-02-14 Method and apparatus for recommending video data WO2018157746A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710113741.4 2017-02-28
CN201710113741.4A CN108509457A (zh) 2017-02-28 2017-02-28 Method and apparatus for recommending video data

Publications (1)

Publication Number Publication Date
WO2018157746A1 (fr) 2018-09-07

Family

ID=63369778

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076784 WO2018157746A1 (fr) 2017-02-28 2018-02-14 Method and apparatus for recommending video data

Country Status (3)

Country Link
CN (1) CN108509457A (fr)
TW (1) TWI753044B (fr)
WO (1) WO2018157746A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879851A (zh) * 2019-10-15 2020-03-13 北京三快在线科技有限公司 Video dynamic cover generation method and apparatus, electronic device, and readable storage medium
CN111126262A (zh) * 2019-12-24 2020-05-08 中国科学院自动化研究所 Method and apparatus for detecting video highlight segments based on a graph neural network
WO2020093914A1 (fr) * 2018-11-08 2020-05-14 Alibaba Group Holding Limited Content-weighted deep residual learning for video in-loop filtering
CN111191054A (zh) * 2019-12-18 2020-05-22 腾讯科技(深圳)有限公司 Method and apparatus for recommending media data
CN111950360A (zh) * 2020-07-06 2020-11-17 北京奇艺世纪科技有限公司 Method and apparatus for identifying infringing users
CN112100441A (zh) * 2020-09-17 2020-12-18 咪咕文化科技有限公司 Video recommendation method, electronic device, and computer-readable storage medium
CN112464083A (zh) * 2020-11-16 2021-03-09 北京达佳互联信息技术有限公司 Model training method, work pushing method, apparatus, electronic device, and storage medium
CN112749297A (zh) * 2020-03-03 2021-05-04 腾讯科技(深圳)有限公司 Video recommendation method and apparatus, computer device, and computer-readable storage medium
WO2024057124A1 (fr) * 2022-09-14 2024-03-21 Digit7 India Private Limited System and method for automatic labelling of media content

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242030A (zh) * 2018-09-21 2019-01-18 京东方科技集团股份有限公司 Picture list generation method and apparatus, electronic device, and computer-readable storage medium
CN109068180B (zh) * 2018-09-28 2021-02-02 武汉斗鱼网络科技有限公司 Method for determining a curated video collection and related device
CN109614537A (zh) * 2018-12-06 2019-04-12 北京百度网讯科技有限公司 Method, apparatus, device, and storage medium for generating video
CN109729395B (zh) * 2018-12-14 2022-02-08 广州市百果园信息技术有限公司 Video quality assessment method and apparatus, storage medium, and computer device
CN111353597B (zh) * 2018-12-24 2023-12-05 杭州海康威视数字技术股份有限公司 Target detection neural network training method and apparatus
CN111401100B (zh) 2018-12-28 2021-02-09 广州市百果园信息技术有限公司 Video quality assessment method, apparatus, device, and storage medium
CN109685631B (zh) * 2019-01-10 2021-06-01 博拉网络股份有限公司 Personalized recommendation method based on big-data user behavior analysis
CN112464027A (zh) * 2019-09-06 2021-03-09 腾讯科技(深圳)有限公司 Video detection method, apparatus, and storage medium
CN111209897B (zh) * 2020-03-09 2023-06-20 深圳市雅阅科技有限公司 Video processing method, apparatus, and storage medium
CN111491187B (zh) * 2020-04-15 2023-10-31 腾讯科技(深圳)有限公司 Video recommendation method, apparatus, device, and storage medium
CN111683273A (zh) * 2020-06-02 2020-09-18 中国联合网络通信集团有限公司 Method and apparatus for determining video stutter information
CN113837820A (zh) * 2020-06-23 2021-12-24 阿里巴巴集团控股有限公司 Data processing method, apparatus, and device
CN112199582B (zh) * 2020-09-21 2023-07-18 聚好看科技股份有限公司 Content recommendation method, apparatus, device, and medium
CN116708725B (zh) * 2023-08-07 2023-10-31 清华大学 Low-bandwidth crowd-scene security monitoring method and system based on semantic encoding and decoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI510064B (zh) * 2012-03-30 2015-11-21 Inst Information Industry Video recommendation system and method thereof
CN104216960A (zh) * 2014-08-21 2014-12-17 北京奇艺世纪科技有限公司 Video recommendation method and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282481A (zh) * 2008-05-09 2008-10-08 中国传媒大学 Video quality assessment method based on an artificial neural network
US20110131595A1 (en) * 2009-12-02 2011-06-02 General Electric Company Methods and systems for online recommendation
CN104219575A (zh) * 2013-05-29 2014-12-17 酷盛(天津)科技有限公司 Related video recommendation method and system
CN104915861A (zh) * 2015-06-15 2015-09-16 浙江经贸职业技术学院 E-commerce recommendation method for building user group models based on ratings and tags

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020093914A1 (fr) * 2018-11-08 2020-05-14 Alibaba Group Holding Limited Content-weighted deep residual learning for video in-loop filtering
CN110879851A (zh) * 2019-10-15 2020-03-13 北京三快在线科技有限公司 Video dynamic cover generation method and apparatus, electronic device, and readable storage medium
CN111191054A (zh) * 2019-12-18 2020-05-22 腾讯科技(深圳)有限公司 Method and apparatus for recommending media data
CN111191054B (zh) * 2019-12-18 2024-02-13 腾讯科技(深圳)有限公司 Method and apparatus for recommending media data
CN111126262B (zh) * 2019-12-24 2023-04-28 中国科学院自动化研究所 Method and apparatus for detecting video highlight segments based on a graph neural network
CN111126262A (zh) * 2019-12-24 2020-05-08 中国科学院自动化研究所 Method and apparatus for detecting video highlight segments based on a graph neural network
CN112749297B (zh) * 2020-03-03 2023-07-21 腾讯科技(深圳)有限公司 Video recommendation method and apparatus, computer device, and computer-readable storage medium
CN112749297A (zh) * 2020-03-03 2021-05-04 腾讯科技(深圳)有限公司 Video recommendation method and apparatus, computer device, and computer-readable storage medium
CN111950360B (zh) * 2020-07-06 2023-08-18 北京奇艺世纪科技有限公司 Method and apparatus for identifying infringing users
CN111950360A (zh) * 2020-07-06 2020-11-17 北京奇艺世纪科技有限公司 Method and apparatus for identifying infringing users
CN112100441A (zh) * 2020-09-17 2020-12-18 咪咕文化科技有限公司 Video recommendation method, electronic device, and computer-readable storage medium
CN112100441B (zh) * 2020-09-17 2024-04-09 咪咕文化科技有限公司 Video recommendation method, electronic device, and computer-readable storage medium
CN112464083A (zh) * 2020-11-16 2021-03-09 北京达佳互联信息技术有限公司 Model training method, work pushing method, apparatus, electronic device, and storage medium
WO2024057124A1 (fr) * 2022-09-14 2024-03-21 Digit7 India Private Limited System and method for automatic labelling of media content

Also Published As

Publication number Publication date
TW201834463A (zh) 2018-09-16
CN108509457A (zh) 2018-09-07
TWI753044B (zh) 2022-01-21

Similar Documents

Publication Publication Date Title
WO2018157746A1 (fr) Method and apparatus for recommending video data
CN108509465B (zh) Method, apparatus, and server for recommending video data
US20140172643A1 (en) System and method for categorizing an image
KR20230087622A (ko) Method and apparatus for detecting, filtering, and identifying an object in streaming video
CN110019943B (zh) Video recommendation method and apparatus, electronic device, and storage medium
WO2022033199A1 (fr) Method for obtaining a user portrait, and related device
CN104715023A (zh) Commodity recommendation method and system based on video content
EP2633439A1 (fr) Search with joint image-audio queries
US20190303499A1 (en) Systems and methods for determining video content relevance
Bedeli et al. Clothing identification via deep learning: forensic applications
JP5261493B2 (ja) Extended image identification
TWI705411B (zh) Method and apparatus for identifying social service feature users
Vijayarani et al. Multimedia mining research-an overview
CN113569740B (zh) Video recognition model training method and apparatus, and video recognition method and apparatus
Tliba et al. Satsal: A multi-level self-attention based architecture for visual saliency prediction
Angadi et al. Multimodal sentiment analysis using reliefF feature selection and random forest classifier
Sebyakin et al. Spatio-temporal deepfake detection with deep neural networks
CN110363206B (zh) Data object clustering, data processing, and data recognition methods
Chang et al. Human vision attention mechanism-inspired temporal-spatial feature pyramid for video saliency detection
Zhang et al. Deep learning features inspired saliency detection of 3D images
Merghani et al. The implication of spatial temporal changes on facial micro-expression analysis
Ou et al. An Intelligent Recommendation System for Real Estate Commodity.
Jiang et al. Video searching and fingerprint detection by using the image query and PlaceNet-based shot boundary detection method
YM et al. Analysis on Exposition of Speech Type Video Using SSD and CNN Techniques for Face Detection
David et al. Authentication of Vincent van Gogh’s work

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18760885

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18760885

Country of ref document: EP

Kind code of ref document: A1