EP4211629A1 - Testing a film precursor - Google Patents

Testing a film precursor

Info

Publication number
EP4211629A1
Authority
EP
European Patent Office
Prior art keywords
evaluation
film product
test
fvp
preliminary film
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21772776.7A
Other languages
German (de)
English (en)
Inventor
Matthias Rosenberger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Malao GmbH
Original Assignee
Malao GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Malao GmbH filed Critical Malao GmbH
Publication of EP4211629A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06395 Quality analysis or management
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/12 Healthy persons not otherwise provided for, e.g. subjects of a marketing survey

Definitions

  • the invention relates to a method for testing a film precursor.
  • the invention also relates to a method for correcting a film precursor.
  • a preliminary film product checking device is also an object of the invention, as is a preliminary film product correction device.
  • a film goes through various stages of development during its production. For example, as part of the development process, a synopsis is first created that provides an initial overview of the film's subject matter. Later, a so-called treatment is developed, which can be seen as a preliminary form of a screenplay. A treatment should convey the complete, dramaturgically coherent story of a film, but without containing fully formulated scenes with complete dialogue, as is the case with a complete screenplay. Finally, the actual screenplay is created. After that, film recordings of individual scenes are shot.
  • This object is achieved by a method for checking a preliminary film product according to claim 1, a method for correcting a preliminary film product according to claim 11, a preliminary film product checking device according to claim 14 and a preliminary film product correction device according to claim 15.
  • the preliminary film product is first made available to a selected group of test persons, i.e. made accessible or presented to them.
  • the presentation can include a visual display, a representation of text, a film or an audio reproduction, as well as any combination of the aforementioned types of presentation.
  • the presentation preferably takes place using a technical system, such as a personal computer, a laptop, a notebook, a tablet or a smartphone.
  • a preliminary film product is an intermediate result generated during the film production process, which provides an overview of the action and other aspects of the end product before the film production is completed or before the end product of the film production is created.
  • the preliminary film product represents information generated beforehand, for example a summary, with regard to the respective film product.
  • the film product itself is then produced in the film production process after the preliminary film product.
  • the film product itself is the end product of the film production, usually the finished film.
  • the preliminary film product is divided into individual sections for a finer-resolution assessment or examination. If the preliminary film product includes, for example, a rough textual summary of the action of a film product, which the test person reads or has read aloud to them, the text can be divided into individual chapters or scenes, paragraphs and sentences or half-sentences. If the preliminary film product is a film, the film can be divided into individual scenes or moments as sections, for example.
  • the individual sections of the preliminary film product are first received by the test persons and, at the same time or immediately afterwards, the test persons generate a number of different reaction signals, which preferably carry information about their conscious or unconscious reactions to the sections of the preliminary film product received.
  • section-related reaction signals are then recorded, for example with the help of sensors.
  • the detection can take place through an input process by the test person, in which a physical signal generated by a conscious input action of the test person is detected by a user interface.
  • the detection can also be implemented by sensor-based measurement of a preferably unconscious physiological reaction of the test person. This fact will be explained later in detail.
  • the "division" of the preliminary film product into the individual sections can preferably take place before a presentation of the preliminary film product, i.e. the work is preferably divided into predetermined "content-related sections" as mentioned above, which depend on the content of the work, such as chapters, scenes, etc.
  • the division can also take place directly when the preliminary film product is created or can already be specified by a product from a previous production stage.
  • the sections can also simply be defined in terms of time and/or quantity (or based on time and/or quantity), e.g. each section comprises a certain amount of time that is required (at least as estimated) for the presentation or reception, or a certain extent (for example a certain amount of text, in particular one line).
  • Different division methods can also be used in combination, e.g. a division into content-related sections with a further time-related and/or quantity-related subdivision, or a parallel application of both methods.
  • the most advantageous variant may depend on the type of preliminary film product and the type of presentation. In the following it is assumed, as a preferred example and without restricting generality, that there is initially (at least also) a division into content-related sections, unless otherwise mentioned.
  • the sections or their boundaries can also be marked in a suitable manner.
  • the complete preliminary film can be presented to the test persons section by section, with possible pauses in between to record the reaction signals.
  • the entire preliminary film product can also be presented together and the respectively recorded reaction signals are assigned to the section presented in connection with it (e.g. at the same time).
  • a "division" into sections can also take place, e.g. automatically, during the presentation of the preliminary film product or its reception by the test persons, especially in the case of a temporal or quantitative division.
  • a further, more refined subdivision of the sections into subsections is also possible in the method.
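By way of illustration only (the patent does not prescribe any concrete implementation), such a division of a textual preliminary film product into content-related sections and finer sentence subsections could be sketched as follows; the splitting rules (blank-line paragraphs, simple sentence boundaries) are assumptions:

```python
import re

def split_into_sections(text: str) -> list[str]:
    """Divide a textual preliminary film product (e.g. a treatment)
    into content-related sections, here simply the paragraphs
    separated by blank lines."""
    return [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]

def split_into_subsections(section: str) -> list[str]:
    """Further subdivide a section into sentence subsections."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", section) if s.strip()]

treatment = "A stranger arrives. The town is uneasy.\n\nAt night, the truth emerges."
sections = split_into_sections(treatment)   # two content-related sections
```

A time- or quantity-based division, as also described above, could be layered on top of this, e.g. by grouping subsections into fixed line counts.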
  • evaluation result data are now generated, which include information regarding the subjective quality of individual sections of the preliminary film product and thus also the film product that is still to be produced later.
  • a statement about the chances of success of a film can be made at an early stage, thus reducing the unnecessary use of resources in a production.
  • the evaluation can be carried out, for example, on the basis of a model relationship between possible reactions of the test persons and individual influencing variables that are related to the later success of a film product.
  • the evaluation can also be implemented using AI-based techniques. For example, on the basis of training data, a so-called deep learning algorithm can be used to generate or train a neural network, which determines the desired result data using the detected reaction signals. A specific example of this will be explained later. This variant can be adapted particularly flexibly to the properties of different test groups and test requirements.
  • the evaluation can also be carried out in part with the involvement of experts. For example, one or more experts can evaluate the texts or verbal reactions of the test persons, while non-verbal reactions are automatically evaluated.
  • an overall test result of the preliminary film product can also be determined depending on the evaluation result data in order to provide the people involved in the film production with a simplified decision-making aid.
  • the overall result provides information about the chances of success and the quality of individual passages and the film as a whole.
  • the overall result consists of a combination of ratings from a large number of test subjects and, with a sufficiently large number of test subjects, enables an approximately objective assessment (generated by subjective individual assessments) of the prospects of success of a film in the production process.
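As a purely illustrative sketch of how the ratings of a large number of test persons might be combined per section and into an overall result (the aggregation rule, here a simple mean, is an assumption and not taken from the patent):

```python
from statistics import mean

def evaluate_sections(ratings: dict[str, dict[int, float]]) -> dict[int, float]:
    """Combine the per-section ratings (0.0-1.0) of many test persons
    into one evaluation result per section, here simply the mean."""
    per_section: dict[int, list[float]] = {}
    for subject_ratings in ratings.values():
        for section, value in subject_ratings.items():
            per_section.setdefault(section, []).append(value)
    return {sec: mean(vals) for sec, vals in per_section.items()}

def overall_result(section_scores: dict[int, float]) -> float:
    """Overall test result as the mean of the section results."""
    return mean(section_scores.values())

ratings = {
    "test_person_a": {1: 1.0, 2: 0.5},
    "test_person_b": {1: 0.5, 2: 0.0},
}
scores = evaluate_sections(ratings)   # {1: 0.75, 2: 0.25}
```

With a sufficiently large group, such a mean approximates the "approximately objective assessment" described above; weighted or robust aggregations would be equally possible.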
  • both the assessment process and the evaluation process are automated as far as possible, or even completely, by technical means, so that the involvement of experts for the preparation of an expert opinion on the prospects of success can be avoided and the associated costs saved.
  • the method according to the invention for checking a preliminary film product is first carried out and then the next version of the preliminary film product is generated depending on the evaluation result data of the method carried out.
  • the production process of a film can advantageously include one or more correction phases in which a largely or completely automated correction of intermediate stages or final stages of a film production takes place.
  • the correction takes place on a broad database and is thus more objective than a correction based on the opinion of individual experts, which contributes to improved prospects of success for the film product produced.
  • the preliminary film product testing device has a playback unit for providing a preliminary film product for a selected group of test persons.
  • Part of the preliminary film product testing device according to the invention can also be a structuring unit for dividing the preliminary film product into individual sections for a finer-resolution assessment or testing, e.g. to subdivide the preliminary film product into predetermined, in particular content-related, sections.
  • the text is then, for example, divided into individual chapters or scenes, paragraphs and sentences or clauses. If the preliminary film product is a film, the film is, for example, divided into individual scenes or moments as sections.
  • the preliminary film product can be divided into suitable sections during creation and, for example, be provided with appropriate section markers.
  • the film precursor testing device also has a detection unit for capturing reaction signals from test persons in sections during or at the same time as or immediately following the reception of the film precursor by the test persons.
  • Sectional detection means in particular that the individual reaction signals are unambiguously assigned to a specific section, so that a finely resolved examination and assessment of the preliminary film product is possible. Especially if the division is only to take place during the presentation, for example in the case of a time-based and/or quantity-based division, the division into sections can also be carried out by the acquisition unit and/or the playback unit or any other unit.
  • reaction signals are based on the reactions and/or assessments of the test persons and are available as physically measurable signals.
  • Part of the preliminary film product testing device is also an evaluation data determination unit which is set up to automatically evaluate the recorded reaction signals in order to generate evaluation result data for the individual sections on the basis of the recorded reactions and/or assessments of the test persons.
  • the evaluation data determination unit, also called an evaluation unit, can have further units, for example a text analysis unit, with which text entries are analyzed for their meaning and a quantitative evaluation variable is generated based on the meaning.
  • the preliminary film product testing device can optionally also include a result determination unit for determining an overall test result of the preliminary film product as a function of the evaluation result data.
  • the film precursor testing device shares the advantages of the method according to the invention for testing a film precursor.
  • the preliminary film product correction device has the preliminary film product checking device according to the invention and a correction unit for changing the preliminary film product depending on the evaluation results of the preliminary film product checking device according to the invention.
  • the correction unit can be set up to eliminate a section in the preliminary film product that is assessed as having little prospect of success.
  • the correction unit can, for example, accept input from a person involved in the film production to change a section or partial section of the preliminary film product.
  • the precursor film correction device according to the invention shares the advantages of the inventive method for correcting a precursor film.
  • Parts of the preliminary film product checking device according to the invention and of the preliminary film product correction device can be designed for the most part in the form of software components.
  • This applies in particular to the reproduction unit, the structuring unit, the acquisition unit, the evaluation data determination unit and the result determination unit, as well as the correction unit.
  • these components can also be partially implemented in the form of software-supported hardware, for example FPGAs, PLDs or the like, particularly when particularly fast calculations are involved.
  • the required interfaces can be designed as software interfaces, for example when it is only a matter of taking over data from other software components. However, they can also be in the form of hardware interfaces that are controlled by suitable software.
  • a partial software implementation has the advantage that computer systems and networks already used in film production can be retrofitted in a simple manner by means of a software update in order to work in the manner according to the invention.
  • the object is also achieved by a corresponding computer program product with a computer program which can be loaded directly into a memory device of such a computer system, with program sections for carrying out all the steps of the method for checking a preliminary film product or the method for correcting a preliminary film product when the computer program is run on the computer system.
  • such a computer program product may also include additional components such as documentation and/or additional components, including hardware components such as hardware keys (dongles, etc.) for using the software.
  • a computer-readable medium, for example a memory stick, a hard disk or another transportable or permanently installed data medium, on which the program sections of the computer program that can be read and executed by a computer unit are stored, can be used for transport to the memory device of the computer system and/or for storage on the computer system.
  • the computer unit can, for example, have one or more microprocessors or the like working together.
  • the computer program can also be transmitted and the data stored via the Internet or a cloud-based solution.
  • Such a computer program product can preferably be in the form of a modular software solution which can be expanded to include hardware components.
  • the software comprises a server program which runs on a computer, e.g. of a film production company, or is stored in the cloud.
  • the software also includes an app, which is stored on the so-called client devices, preferably end devices such as smartphones, of the test persons. If a preliminary film product is now to be tested using the method according to the invention, the server program transmits the preliminary film product to the clients, for example the smartphones of the test persons. With the help of the app, the test persons can now watch or listen to the preliminary film product and generate reactions with corresponding status signals and assessments with corresponding input signals.
  • the app has program parts that are used to control sensors on the client or smartphone to record reactions and assessments.
  • the app may provide scroll tracking capability.
  • the app uses a camera to record eye movements or the color of the test person's skin.
  • a camera integrated into a client, for example a smartphone, of the test person can be used as a camera for recording the physiology of a test person.
  • the app can also include the ability to measure reading speed, for example based on eye movements or scrolling speed or scrolling behavior.
  • the app can have the ability to assign measurement times and/or input times to the recorded reactions and/or assessments.
  • the app preferably converts the recorded signals into measured values or input texts.
  • the server program accepts the measured values and input texts determined by the app as part of the assessments and reactions of the test subjects and carries out an evaluation based on the information transmitted in order to generate evaluation result data for the individual sections of the preview. During the evaluation, the reactions and assessments generated by the individual test persons are combined to form an evaluation result for the individual sections.
  • the server program also performs further processing steps, such as determining an overall test result and correcting the preliminary film product in order to improve the film production's chances of success.
  • Further particularly advantageous configurations and developments of the invention result from the dependent claims and the following description, whereby the patent claims of a specific category can also be developed according to the dependent claims of another category and features of different exemplary embodiments can be combined to form new exemplary embodiments.
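The section-wise assignment of time-stamped measured values, which the app's time-stamping capability described above makes possible, could be sketched roughly as follows; the data layout (section start times, (timestamp, value) samples) is hypothetical:

```python
import bisect

def assign_to_sections(section_starts: list[float],
                       samples: list[tuple[float, float]]) -> dict[int, list[float]]:
    """Assign time-stamped sensor samples (time, value) to the section
    whose presentation interval contains the timestamp.
    section_starts: ascending start times; the last section runs open-ended."""
    assigned: dict[int, list[float]] = {i: [] for i in range(len(section_starts))}
    for t, value in samples:
        idx = bisect.bisect_right(section_starts, t) - 1
        if idx >= 0:
            assigned[idx].append(value)
    return assigned

# Sections start at 0 s, 30 s and 75 s; samples are (timestamp, pulse value).
result = assign_to_sections([0.0, 30.0, 75.0],
                            [(12.0, 68.0), (40.0, 74.0), (80.0, 90.0)])
```

The server program could then aggregate the values collected per section across all test persons.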
  • the reaction signals recorded as part of the method for checking a film precursor can include input signals or status sensor signals.
  • An input signal should be understood as a signal that is consciously transmitted by a test person via an interface.
  • the input signal includes information that is consciously generated and transmitted by the test person.
  • state sensor signals are generated by sensors that measure the state, in particular a physiological or biometric state, of a test person.
  • the status sensor signals thus include information that can be evaluated in order to determine information about the emotions and the unconscious attitude of the test person towards the preliminary film product.
  • Responses and/or assessments can be generated, for example, by a test subject through conscious input via a user interface.
  • a test person presses an input key or uses a pointing device to directly or indirectly touch an evaluation field on an image display device provided for such an input.
  • Such a device can, for example, comprise a touchscreen, a touchpad or the like. Conscious inputs have the advantage that they are very differentiated and can be used directly, and the information generated with them can therefore be easily evaluated.
  • a status sensor signal is particularly suitable for capturing biometric or physiological data that is generated unconsciously by a test subject, for example.
  • Unconscious reactions can become noticeable through physiological phenomena or other involuntary movements or changes in physical conditions, such as a change in body temperature, skin color, sweat production, blood pressure or heart rate.
  • Phenomena of this type can be detected by a sensor-based measurement of biometric and/or physiological data of the test persons.
  • the advantage of measuring unconscious reactions is that they can hardly be intentionally influenced by the test person and are therefore authentic and cannot be easily manipulated.
  • the test person does not have to cooperate when recording unconscious reactions or reaction signals based thereon, which is particularly advantageous for children as test persons.
  • the choice of film consumption is largely determined by emotional aspects, which can possibly be more authentically and precisely ascertained on the basis of involuntary reactions than with the help of detailed and well-thought-out conscious assessments.
  • reaction signals can be assigned to a subsection of a section.
  • a subsection can, for example, contain several sentences or exactly one sentence or half-sentence of a text or dialogue or, in the case of a film, a scene or dialogue sequence, or comprise a specific period of time, e.g. one second.
  • Sub-sections can (like the sections themselves) be selected based on content, duration and/or quantity.
  • reaction signals are preferably based on both unconscious and conscious reactions and/or assessments.
  • a consistency check can be carried out using these different information sources if, as described above, they can be assigned to one and the same subsection and are therefore indirectly related to one another.
  • At least one unconsciously generated reaction signal is preferably taken into account in the consistency check.
  • This consistency check is carried out in particular between consciously and unconsciously generated evaluation data.
  • Such a consistency check makes it possible to check the quality and realism of the reactions and/or assessments. For example, it can happen that a test person would like to collect a fee for the test but does not feel like going through the actual test process and therefore carries out the test inattentively and without interest. The test result generated by this person may be worthless due to their disinterest, or may even negatively affect the reliability of the overall result if the test person enters arbitrary or intentionally false ratings.
  • If reaction signals generated by unconscious behavior, which include for example physiological or biometric data, are recorded and these contradict conscious reactions recorded at the same time or relating to the same product sections, it can be concluded that the test person was not engaged, and their test result is therefore better left unconsidered.
  • the test person may also be distracted during the test, such that they fail to provide a usable response or assessment only at times, for example when receiving a single section or a few sections of the preliminary film product.
  • Such a temporary failure or disruptive effect can also be detected with the help of a consistency check.
  • the assessment of the sections in question can be neglected in an overall assessment, while sections that were received by the test person without distraction, or their assessment, are included in the final result. In this way, a partial usability of a test process can also be precisely identified and taken into account in the evaluation.
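A consistency check of this kind could be sketched, very roughly, as follows; the pulse-based arousal proxy and all thresholds are illustrative assumptions, not values from the patent:

```python
def consistent(conscious_score: float, pulse: float, baseline_pulse: float,
               threshold: float = 0.5) -> bool:
    """Crude plausibility check between a conscious rating (0.0-1.0)
    and an unconscious arousal proxy (pulse relative to the person's
    baseline). A high rating with a clearly lowered pulse, or vice
    versa, is flagged as inconsistent."""
    arousal = pulse / baseline_pulse          # >1: aroused, <1: calm
    excited = arousal > 1.05
    calm = arousal < 0.95
    if conscious_score >= threshold and calm:
        return False    # claims to like the section, but appears bored
    if conscious_score < threshold and excited:
        return False    # claims boredom, but appears engaged
    return True

def usable_sections(per_section: dict[int, tuple[float, float]],
                    baseline: float) -> list[int]:
    """Keep only the sections whose conscious rating and unconscious
    signal agree; the rest are excluded from the overall assessment."""
    return [sec for sec, (score, pulse) in per_section.items()
            if consistent(score, pulse, baseline)]
```

This reflects the partial-usability idea above: only the sections received without distraction contribute to the final result.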
  • the preliminary film product includes at least one of the following types of reproduction:
  • a text reproduction allows a test person to capture details of the preliminary film product at an individual reading speed and to quickly submit both an intuitive and a detailed, well-considered, purely subjective evaluation.
  • this type of reproduction is particularly suitable for an early phase of film production, in which a preliminary film product is usually only available in the form of a written summary, such as a synopsis, a treatment or a screenplay.
  • An audio version of a film, for example in the form of an audio book or text-to-speech playback, enables a particularly time-saving reception of the film material, whereby the test person can also pursue a parallel activity at the same time.
  • the test subject can simultaneously be traveling on a train, doing endurance sports, or doing household chores, such as cleaning, ironing, or cooking, so that he or she does not have to sacrifice valuable time exclusively for the testing process of the film. Since in this variant the test person is at least temporarily likely to be distracted by their main activity, the consistency check described can be used particularly advantageously here to check whether the test person is currently distracted or is paying enough attention.
  • a film presentation enables a scenario comparable to the end product when played back or received by the test person, so that the test person's reaction signals and the evaluation results based on them are more realistic and reliable than with a text or audio version.
  • Such a film representation can, for example, include dialogues and noises or music stored in a graphic storyboard.
  • the film representation can also be realized by an animated film created using a game engine.
  • the film representation can also include a rough cut version of the film.
  • In response to an input request signal, a so-called prompt signal, a field with a gradual color gradient or multiple color bars can be displayed.
  • the color bars can be displayed on a screen at the end of each section and can be selected by the user, i.e. the subject, using a pointing device such as a mouse.
  • Such a section can include, for example, text, an audio excerpt, a graphic, in particular a moving graphic or a film sequence or combinations thereof, which reproduce dialogues and/or an action.
  • In the case of text, a section typically comprises from several lines up to about one or two pages of text, without page breaks. From a certain minimum length, the text can be displayed as scrollable text, so that the reader can work their way to the end of the section without having to turn pages.
  • the reader or recipient switches to the next section with an input, such as a click.
  • an input prompt is displayed with a button that has a gradual color gradient or several differently colored buttons.
  • buttons can also include other distinguishing features such as different textures, symbols or the like.
  • the user can switch to the next section with a single click and at the same time submit a rating: by selecting a point with a specific color on the colored area or, in the case of a representation of discrete differently colored or textured buttons, by selecting one of these, the user gives an evaluation of the section that has just been read or received.
  • a specific color or texture is linked to a specific value of an evaluation. Only when the user or test person has made their choice, for example by clicking, is the next section automatically displayed, so that the test person can evaluate each individual section.
  • the test person is shown a button with only three colors, for example green, yellow and red.
  • Green means, for example, that the test person liked the section, yellow means that the section can still be improved or was not fully understood, and red means that the test person did not like the section.
  • a button for the evaluation can be displayed with a continuous color gradient, for example using the three colors mentioned.
  • the test person can then click on any area and in this way give a finely graded rating. For example, any rating between 0% agreement and 100% agreement can be given for a section.
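The mapping from a click on such a continuous color-gradient button to an agreement value between 0 % and 100 % could be sketched as follows; the pixel-based mapping and the three-color value assignment are assumptions for illustration:

```python
def rating_from_click(x_pixel: int, button_width: int) -> float:
    """Map a click position on a continuous color-gradient button to an
    agreement value between 0.0 (0 %) and 1.0 (100 %)."""
    if button_width <= 1:
        raise ValueError("button must be wider than one pixel")
    x = min(max(x_pixel, 0), button_width - 1)   # clamp to the button area
    return x / (button_width - 1)

def rating_from_discrete_button(index: int) -> float:
    """Map one of three discrete buttons to a value:
    0 = green (liked), 1 = yellow (improvable), 2 = red (disliked)."""
    return {0: 1.0, 1: 0.5, 2: 0.0}[index]
```

Selecting the rating and advancing to the next section can then be triggered by the same click, as described above.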
  • a text field in which a free text comment can be entered can be displayed to the test person in response to a predetermined input signal, for example a long click.
  • the two different types of assessment can also be combined with each other.
  • the input signal can also be included in the evaluation as a reaction signal. The very fact that a test person makes a comment at all can indicate that a section is particularly important to them.
  • the content of the text can also be evaluated for an evaluation.
  • individual sentences or half-sentences of a section can also be marked, in particular marked in color or with a texture, in order to evaluate them.
  • a field for an overall assessment of the preliminary film product by means of a graph representation may also be displayed in the prompt embodied by a prompt signal.
  • a test person creates a graphic curve that illustrates an approval value as a function of the current position in the text or the film location, for example as a function of a section number and a line number, a time specification or a key scene.
  • the agreement value can vary from 0% agreement to 100% agreement, for example, so that a very differentiated evaluation is possible.
  • an acoustic interface can also be implemented for an evaluation carried out by voice input.
  • For example, an acoustic prompt signal can also be generated first.
  • An evaluation made by voice input can include an oral specification of a scale value and/or a free text comment.
  • a stop command can also be given, for example, by the test person using a keyword, which is recognized with the aid of speech recognition. The acoustic performance is then stopped and the test person has the opportunity to evaluate the sentence or half-sentence just heard.
  • a free text comment can then optionally be added to give a producer or author additional valuable information for improving the section or passage. Specifying a scale value requires little time, while a free text comment can reflect specific individual impressions and reactions of a test person.
  • a status sensor signal that includes information related to an unconscious evaluation may comprise a reading speed measurement, a scroll-tracking value, a heart rate reading or an eye movement.
  • a pulse value provides information about the state of arousal of a person.
  • a high pulse value can, for example, provide an indication that a test person finds a section currently being received very exciting.
  • a low heart rate indicates that the test person is bored during reception.
  • an eye movement can also reflect an emotional sympathy of the test person.
  • a low reading speed tends to indicate a boring section.
  • a high reading speed indicates that the subject is captivated by the content received.
  • the ratio of a currently measured reading speed to a test person's normal speed is particularly relevant here. If a reading speed is significantly lower than the normal or average speed of the person concerned, this can also indicate a text that is difficult to understand. An acceleration of the reading process, on the other hand, suggests an exciting passage, while a slowing of the reading flow suggests a boring passage. It cannot be ruled out, however, that the conclusions drawn from the test person's behavior run contrary to the description above.
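The speed-ratio heuristic described above can be sketched as a small function. The thresholds and the function name are illustrative assumptions, not part of the claimed method:

```python
def classify_reading_speed(v_current, v_mean, slow=0.8, fast=1.2):
    """Heuristic interpretation of a test person's reading speed.

    v_current: measured reading speed for the current passage
    v_mean:    the person's normal (average) reading speed
    The thresholds 'slow' and 'fast' are illustrative assumptions.
    Returns the speed ratio and a tentative label.
    """
    ratio = v_current / v_mean
    if ratio < slow:
        # Slowing down may indicate boredom or a hard-to-understand text.
        label = "boring or difficult"
    elif ratio > fast:
        # Speeding up suggests an exciting, captivating passage.
        label = "exciting"
    else:
        label = "neutral"
    return ratio, label
```

As the text notes, the opposite interpretation cannot be ruled out for individual test persons, which is why such heuristics would normally be calibrated per person or per target group.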
  • a relation between the reading behavior of the test person or of the target group to which the test person belongs, and the interest of the test person or the target group, can be determined.
  • the relationships determined during the training process, for example using a larger number of test films or test scripts, can be stored as labeled training data.
  • a neural network or another system with artificial intelligence can be generated with the labeled training data.
  • the labeled training data contain a number of measured variables, such as the reading speed and a braking or acceleration value of the reading behavior, as input values.
  • the labeled training data have parameter values for individual emotions or mental states, such as values for boredom, interest, understanding, etc., as output vectors. If the time required of the test person is to be kept low, input signals and state sensor signals recorded during the actual test can also be evaluated, so to speak in real time, to determine the sought connection between the reading speed or reading behavior and an implicit evaluation by the test person, preferably again with the help of a neural network or another artificial-intelligence system (abbreviated AI). In this variant, the training data are generated more or less during operation, so that the neural network or the AI system is continuously improved over the course of a test process.
  • KI: artificial intelligence (AI)
  • slow scrolling can indicate a slow reading speed and faster scrolling can indicate an increased reading speed.
  • scrolling up to reread a paragraph or subparagraph may indicate interest in a text or a particular paragraph. It could also allow conclusions to be drawn about a passage that is difficult to understand.
  • the patterns mentioned can be correlated with evaluation variables with the aid of artificial intelligence. For example, values of measures of the subject's unconscious behavior are correlated to a scale of rating values.
  • the assessment values can represent, for example, a measure of interest, approval, perceived excitement or friendship, or the like.
  • the behavioral patterns of the test person or a target group can be determined in a training process and used for an input vector.
  • Such an input vector can include, for example, a scrolling speed and a scrolling direction as parameters.
  • Parameter values for the extent of interest, agreement, perceived tension and curiosity can be determined as an output vector for a neural network.
  • the test person explicitly states their emotions, for example, so that the training data can serve as a reference. After the neural network has been trained with the training data, it can then independently determine the emotions of a test person, the target group or a very specific test person.
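The training described above, in which a behavioral input vector (for example scrolling speed and scrolling direction) is mapped to an output vector of emotion parameters, can be sketched as follows. As a minimal stand-in for the neural network mentioned in the text, a multi-output linear model is fitted by least squares; all data values, the feature choice and the function name are invented for illustration:

```python
import numpy as np

# Hypothetical labeled training data gathered in a training process:
# input vectors  = [scrolling speed (lines/s), scrolling direction (+1 down, -1 up)]
# output vectors = [interest, agreement, perceived tension, curiosity], each 0..1
X = np.array([[4.0,  1], [1.0,  1], [0.5, -1], [3.0,  1], [0.8, -1]])
Y = np.array([[0.9, 0.8, 0.9, 0.7],
              [0.3, 0.4, 0.2, 0.3],
              [0.7, 0.5, 0.4, 0.8],   # scrolling back up: interest / curiosity
              [0.8, 0.7, 0.8, 0.6],
              [0.6, 0.5, 0.3, 0.7]])

# Add a bias column and fit a multi-output linear model by least squares
# (a simple stand-in for the neural network described in the text).
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

def predict_emotions(scroll_speed, direction):
    """Map a behavioral input vector to the four emotion parameters."""
    x = np.array([scroll_speed, direction, 1.0])
    return x @ W   # [interest, agreement, tension, curiosity]
```

In a real system the explicit emotion statements of the test persons would serve as the reference labels, and a non-linear model such as a neural network would replace the linear fit.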
  • individual sections (assigned to the preliminary film product) that have received an unfavorable rating according to a predefined rating scale are changed or sorted out in the step of creating the film product.
  • the evaluation scale can, for example, be defined in advance with regard to the intended effects. For example, a scene that induces too much fear may be rated as poor in a children's film, whereas in an adult horror film that would be the intended effect.
  • the preliminary film product and thus also the later film product can be adjusted on an objective basis without the time-consuming consultation of experts in a simple manner and without great effort, in order to improve the prospects of success with the target group.
  • the target group of a film product can also be determined more precisely, from which further optimizations can then be derived.
  • Changing a section can, for example, also include specifying a suitable viewing angle for imaging a section.
  • changes can also be made in the representation or pictorial recording of a scene in order to increase the attractiveness of a film segment for the target group.
  • the different ratings of a test person based on input signals and state sensor signals are combined to form an overall rating of an entire text (or any other type of preliminary film product).
  • the overall score may comprise a plot of one or more score curves versus time or position in the text (or any other type of precursor film).
  • the individual evaluation curves can be marked or colored differently, for example.
  • a single evaluation curve determined by a weighted combination, for example addition, of the different evaluations that have arisen can also be generated in the overall evaluation for a test person.
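The weighted combination of several evaluation curves of one test person into a single curve might be sketched as follows; the curve names and weights are illustrative assumptions:

```python
def combine_evaluation_curves(curves, weights):
    """Combine several evaluation curves of one test person into a single
    curve by weighted addition, as described above.

    curves:  dict mapping a curve name (e.g. 'input', 'pulse') to a list of
             values, one per section or line; all curves must share one length.
    weights: dict with a weight for each curve name (illustrative values).
    Returns the normalized weighted sum per position.
    """
    names = list(curves)
    length = len(curves[names[0]])
    assert all(len(curves[n]) == length for n in names), "curves must align"
    total_w = sum(weights[n] for n in names)
    return [
        sum(weights[n] * curves[n][i] for n in names) / total_w
        for i in range(length)
    ]
```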
  • the overall assessment can then include assessment curves from a number of test persons, which are displayed “superimposed” in a common diagram. With this variant, contradictory user opinions can be presented clearly.
  • Test results from a potential test person can also be compared with target-group-specific reference values in order to determine whether that person should be selected for a future test or series of tests. To weight the ratings of the different test persons realistically, it is advantageous to know the quality of each test person's ratings and what taste and affinity the test persons have for individual film genres. For the latter criteria, the extent to which a test person differs from the target group is particularly relevant.
  • if a test person differs greatly from the target group, that test person's rating is given a correspondingly low weighting.
  • the quality of an assessment can be determined, for example, by having a test person individually evaluate a number of reference films, for example 100.
  • the test person's ratings of the reference films can be compared, for example, with the ratings of the target group or one or more qualified evaluators.
  • the weighting of the evaluation of the test person then depends on the similarity of the evaluations of the test person and the evaluating persons or the target group.
  • the weighting can also be based on detailed text-based evaluations of the reference films by the test subjects.
  • experts, for example producers, assess the quality of the detailed evaluations and can thus determine a weighting in a particularly well-founded manner, albeit with corresponding additional effort.
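One simple way to derive the weighting from the similarity between a test person's reference-film ratings and those of the target group (or of qualified evaluators) might look like this; the mapping from the mean absolute deviation to a 0..1 weight is an illustrative assumption:

```python
def rater_weight(person_ratings, target_ratings, max_rating=10.0):
    """Derive a weight for a test person from how closely their ratings of
    a set of reference films match the target group's ratings.

    Ratings are on a 0..max_rating scale. The linear mapping from the mean
    absolute deviation to a 0..1 weight is an illustrative assumption.
    """
    assert len(person_ratings) == len(target_ratings)
    n = len(person_ratings)
    mad = sum(abs(p - t) for p, t in zip(person_ratings, target_ratings)) / n
    # Perfect agreement -> weight 1.0; maximal disagreement -> weight 0.0.
    return 1.0 - mad / max_rating
```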
  • FIG. 1 shows a flowchart which illustrates a method for checking a screenplay according to an embodiment of the invention
  • FIG. 2 shows a schematic representation of a visual evaluation of a screenplay according to an embodiment of the invention
  • FIG. 3 shows a schematic representation of a line-by-line evaluation of a screenplay according to an embodiment of the invention
  • Figure 4 shows an example diagram that quantifies the approval of a test person
  • Figure 5 shows an example diagram which illustrates a test person's reading speed
  • FIG. 6 shows a schematic representation of an evaluation of a screenplay based on an audio transmission
  • FIG. 7 shows a schematic representation of an evaluation of a film prototype
  • Figure 8 shows example graphs illustrating the relationship between tension values and physiological measurements
  • FIG. 9 shows an exemplary diagram to illustrate physiological variables measured on test persons during a screening of a preliminary film product
  • FIG. 10 shows a diagram with tension values determined on the basis of the physiological variables shown in FIG. 9.
  • FIG. 11 shows a schematic representation of a preliminary film product testing device according to an exemplary embodiment of the invention
  • FIG. 12 shows a schematic representation of an evaluation device of a preliminary film product testing device according to an exemplary embodiment of the invention
  • FIG. 13 shows a schematic representation of input vectors and evaluation vectors for an AI-based evaluation of a preliminary film product according to an exemplary embodiment of the invention
  • FIG. 14 shows a software kit for implementing the method according to the invention for testing a preliminary film product.
  • FIG. 1 shows a flow chart 100 which describes a method for correcting a preliminary film product.
  • a treatment T, i.e. a short summary of a film plot, is provided.
  • This summary is used as the preliminary film product FVP in the embodiment illustrated in Figure 1.
  • the content of the treatment T is displayed as text on a display unit, for example a computer screen.
  • Sections A of the summary are displayed separately to a plurality of test persons.
  • a section can, for example, comprise one screen page, but also more or less than one screen page.
  • the test person can move back and forth in the text by scrolling, i.e. the division of the preliminary film product is performed here in connection with the presentation of the text.
  • the test persons then perform an assessment B by inputting assessment information in step 1.III after each reading of a section.
  • they control one of the control panels (shaded differently) shown in FIG. 2 at the bottom of the screen in order to provide an assessment of the quality of the section.
  • dotted hatching can mean that the section of the evaluating person is completely dissatisfied.
  • Hatching drawn with vertical dashes can mean that the section either needs improvement but contains usable approaches or that the section is not understandable (enough), and hatching drawn with horizontal dashes (see box 3c in FIG. 2) means that the evaluating test person likes the read section.
  • alternative markings can of course also be used for the individual control panels, such as different colors or the like.
  • Step 1.IV indicates that the two previous steps 1.II and 1.III are repeated until the end of the text is reached. In other words, if the test person has not yet reached the end of the text, they return to step 1.II. If the last section of the text has been evaluated in step 1.IV, a transition is made to step 1.V.
  • an overall evaluation GA of the evaluations carried out by a plurality of test persons then takes place.
  • the weightings can be determined, for example, on the basis of a test phase that takes place in advance or a learning phase with the aid of a machine learning method.
  • a test person is given a plurality of test scripts for evaluation. It is then determined what “quality” the person's ratings have.
  • the “quality” can be determined, for example, based on the later actual success of the film made from the test script. This means that if the test person's assessment agrees with an assessment of the actual audience, a high-quality assessment can be assumed. Further indications for a weighting are whether the test person has a similar taste to the target group or has a fundamentally positive attitude towards the topic of a film.
  • step 1.VI Based on the section-by-section evaluation of the summary, a concrete statement can now be made about individual sections of the later film product and its overall quality in step 1.VI. In this way, the chances of success of the film product can already be estimated in an early phase of film production and in this way the risk of an unnecessary use of resources in film production can be reduced. Furthermore, in step 1.VI, corrective measures are carried out on the basis of the evaluation results in order to improve the preliminary film product and thus the chances of success of the later film product.
  • FIG. 2 shows a schematic representation of a visual evaluation of a screenplay according to an exemplary embodiment of the invention.
  • a screen 1 is shown on which a portion of text of a synopsis of a film having a plurality of lines 2 is displayed.
  • An evaluating test person reads the section and then uses a mouse to select one of the three differently colored fields 3a, 3b, 3c in a rectangular evaluation field 3 displayed at the lower edge of the screen in order to give an evaluation of the read section.
  • the different colors of the individual fields 3a, 3b, 3c are symbolized by different textures.
  • the next section is then automatically displayed, which the assessing test person reads through again. After reading the next section, she gives a rating again, and so on.
  • FIG. 3 shows a schematic representation of a line-by-line evaluation of a screenplay according to an exemplary embodiment of the invention. Similar to the exemplary embodiment shown in FIG. 2, a section of a summary of a preliminary form of a screenplay with a plurality of lines 2, 2a, 2b is displayed on a screen 1. In contrast to the exemplary embodiment shown in FIG. 2, in the exemplary embodiment shown in FIG. 3 individual lines 2a, 2b are selected and evaluated using a colored marking. For example, the third row 2a was marked with a color, symbolized by a dotted area 4a in FIG. 3, and thus given an average rating, which means that the content already shows promising approaches but still needs improvement.
  • a fifth row 2b is marked with a white color bar 4b, which means that the evaluating person likes this row very much.
  • a summary can be evaluated very finely and locally by further subdividing the sections into subsections (here into individual lines), so that the authors receive very precise feedback and can make local corrections to the screenplay more effectively than with a purely section-by-section evaluation (without further subdivision), as realized in the embodiment shown in Figure 2.
  • FIG. 4 shows a diagram 5 which represents an evaluation curve 5a.
  • a value G is represented as a function of a line index z, which specifies a line number of a section or subsection.
  • the G value represents the extent to which a rater liked a section A, subsection or sentence. After or while reading the section or after or while reading an entire text, the evaluator draws a curve that gives information about which subsections of the sections they liked better or worse.
  • the value G can be entered, for example, on a tablet using a light pen or by touching a screen of the tablet designed as a touchscreen with a user finger in an evaluation field displayed on the screen of the tablet for the evaluation.
  • FIG. 5 shows a graph 6 whose curve 6a, unlike curve 5a shown in FIG. 4, was recorded automatically while reading through a script.
  • a reading speed V of a reader is determined as a function of the line or line number z. This reading speed can be determined, for example, on the basis of a time interval between the ratings of the individual sections or lines. It can also be determined using a scrolling speed of the evaluating person.
  • an average reading speed Vm of the assessing test person can be determined in advance using test material. In the sections or lines z in which the test person's reading speed v exceeds their average speed Vm, it can be assumed that the reader particularly likes these sections or that the reader finds these sections particularly exciting.
  • an evaluation factor can be determined which indicates the degree of agreement of the reader with a part of a section.
  • an automated evaluation is obtained, which can be carried out, for example, in addition to an active evaluation as illustrated in FIGS. 2 to 4, or which can be used as an alternative if the reader is not to be burdened with active cooperation.
  • This graph can also be created automatically through click evaluations.
  • test person 8 can now make acoustic evaluations of individual sections through corresponding verbal inputs.
  • the playback device 9 can have a speech-to-text and text-to-speech module, which can convert speech into text and vice versa. If the text is read out, the test person is asked for verbal feedback at the end of each section. For example, test person 8 gives a value on a scale from 0 to 10. After the spoken number has been delivered, the system proceeds to reading the next section. Similarly, an oral stop command can be given after a single sentence, so that individual sentences that have just been heard can also be evaluated.
  • a spoken free text comment can be recorded at the end of a paragraph or an individual sentence, with which the respective paragraph or individual sentence is evaluated.
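Assuming a speech recognition step has already converted the spoken input into a transcript, the interpretation of the verbal feedback (scale value, stop keyword, or free-text comment) could be sketched as follows; the stop keyword and function name are illustrative assumptions:

```python
STOP_KEYWORD = "stop"  # illustrative keyword for halting playback

def parse_verbal_feedback(transcript):
    """Interpret a transcript of a test person's spoken input, assuming a
    speech-to-text step has already run (not shown here).

    Returns ('stop', None), ('score', value 0..10) or ('comment', text).
    """
    words = transcript.strip().lower().split()
    if not words:
        return ("comment", "")
    if words[0] == STOP_KEYWORD:
        return ("stop", None)
    # Accept a spoken number on the 0..10 scale, e.g. "7" or "seven".
    number_words = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
                    "five": 5, "six": 6, "seven": 7, "eight": 8,
                    "nine": 9, "ten": 10}
    token = words[0]
    if token.isdigit() and 0 <= int(token) <= 10:
        return ("score", int(token))
    if token in number_words:
        return ("score", number_words[token])
    # Anything else is treated as a free-text comment on the passage.
    return ("comment", transcript.strip())
```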
  • FIG. 6 also shows a button 41 which the test person can press for a conscious evaluation.
  • the test person wears a pulse sensor 42 on his arm, with which status sensor data ZS are obtained, which represent information for an unconscious evaluation.
  • the status sensor data ZS and the deliberate evaluation data E are transmitted to a preliminary film product testing device 20 (see FIG 8 for details), which carries out an evaluation on the basis of the recorded data E, ZS and also controls the playback device 9 with the aid of control signals S in order to start playback with to synchronize the evaluation action of the test person.
  • FIG. 7 shows a schematic representation of a display device 10 of a preliminary film product testing device for a film prototype.
  • the film prototype is displayed on a screen 10a of the display device 10 of the evaluating subject (not shown) demonstrated.
  • An acoustic reproduction takes place through two loudspeakers 10b arranged on the sides of the screen 10a.
  • the evaluating test person looks at a scene 11 and selects one of the colored sub-fields 3a, 3b, 3c (shown hatched in FIG. 7). This evaluation can also be initiated via an external input unit.
  • an acoustic evaluation similar to the exemplary embodiment shown in FIG. 6 is also possible.
  • reactions of the assessing test person are captured in images.
  • the recorded eye movements are included in the evaluation of the film prototype. For example, based on the speed of an eye movement, a state of tension of the assessing test person can be determined and in this way the effect of the film on the assessing test person can be determined.
  • the heart rate of the assessing test person can also be measured here while the film prototype is being played and, based on the pulse frequency, it can be concluded how exciting the assessing test person thinks a film scene is.
  • Other measured values can include body temperature and skin moisture, which can be measured, for example, via the electrical resistance of the skin.
  • the measurements of the physiology of the evaluating subject can be evaluated, for example, by comparison with reference values.
  • the evaluating test person is shown reference films in advance, the scenes of which the person explicitly evaluates.
  • the physiological variables such as eye movement, pulse rate, body temperature and skin moisture are measured and related to the person's subjective assessment of the scenes.
  • a mathematical connection between the perceived tension and the recorded physiological parameters of the test person can be determined, which can then be used for the evaluation of a film prototype.
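The "mathematical connection" between the recorded physiological parameters and the perceived tension could, for example, be obtained by a least-squares fit over the reference data described above; all numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical reference data recorded while the test person explicitly
# rated reference films: each row is [eye movement Fa, pulse Fp,
# body temperature Tk], paired with the stated tension value ESR.
measurements = np.array([[0.2, 60.0, 36.5],
                         [0.5, 75.0, 36.8],
                         [0.9, 95.0, 37.1],
                         [0.4, 68.0, 36.7]])
esr_stated = np.array([1.0, 4.0, 9.0, 3.0])

# Fit a linear relationship ESR ~ w . [Fa, Fp, Tk, 1] by least squares;
# this is one simple way to obtain the sought mathematical connection.
A = np.hstack([measurements, np.ones((len(measurements), 1))])
w, *_ = np.linalg.lstsq(A, esr_stated, rcond=None)

def estimate_esr(fa, fp, tk):
    """Estimate the tension value from physiological measurements."""
    return float(np.array([fa, fp, tk, 1.0]) @ w)
```

A real implementation would use many more reference samples per test person and could replace the linear fit with a neural network, as described elsewhere in the text.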
  • the different test variables and associated partial results are combined again to form an overall assessment result.
  • Such a combination can include, for example, a weighted addition of the individual evaluation variables.
  • FIGS. 8 to 10 illustrate tension values, which represent a perceived tension of a person, together with physiological measurements.
  • in FIG. 8, physiological measured values such as eye movement Fa, pulse frequency Fp and body temperature Tk are plotted against tension values ESR of a tension sensation in a diagram.
  • the values given are only exemplary and are not intended to have any limiting meaning.
  • ESR reference values can, for example, be recorded once for each test person and stored in a database.
  • in FIG. 9, the measured values of the above-mentioned physiological variables (eye movement Fa, pulse frequency Fp and body temperature Tk) of a test person are shown graphically as a function of a line number Z.
  • the measured values Fa, Fp, Tk shown in FIG. 9 are now measured on a test person during a presentation of a preliminary film product.
  • FIG. 11 shows a schematic representation of a preliminary film product testing device 20 according to an exemplary embodiment of the invention.
  • the preliminary film product checking device 20 is connected to a display device 10 .
  • the display device 10 includes a screen (see FIGS. 2, 3) with which a preliminary film product can be displayed for a selected group of test subjects and which has a touchscreen function in order to receive inputs E from a test subject.
  • the preliminary film product checking device 20 has a structuring unit 21 for dividing the preliminary film product into individual sections. Structuring signals S are transmitted from the structuring unit 21 to the display device 10 in order to structure the displayed preliminary film product into individual sections.
  • Part of the preliminary film product testing device 20 is also a detection unit 22 for partially detecting reactions and/or assessments of the test persons during the reception of the preliminary film product by the test persons.
  • the detection unit 22 thus receives input signals E from the display device 10 and status sensor signals ZS from status sensors (not shown), which capture the previously mentioned status information about the test person.
  • the input signals E and status sensor signals ZS recorded by the detection unit 22, which include information about the reactions and assessments of the test person, are evaluated by an evaluation unit 23, with an evaluation B of the individual sections being carried out on the basis of the recorded reactions and/or assessments of the test persons 8.
  • the evaluation data B are transmitted to an overall evaluation unit 23a, which is also part of the preliminary film product testing device 20 in the exemplary embodiment shown.
  • the overall evaluation unit 23a generates an overall result through a weighted combination of the individual evaluations B.
  • the weighting can be based, for example, on consistency values which were also included in the calculation of the evaluations, can be part of the evaluation results B, and allow a statement as to whether the unconscious evaluations ZS and the conscious inputs E of a test person are consistent. The more consistent this rating information is, the more heavily an individual rating can be weighted.
  • Other influencing factors can also be included in the weighting, such as a value for the test person's competence or the similarity of the test person's views with a target group.
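The consistency-based weighting described above can be sketched as follows; the normalization of both scores to a common 0..10 scale and the specific weighting formula are illustrative assumptions:

```python
def consistency_weight(conscious_score, unconscious_score, scale=10.0):
    """Weight an individual evaluation by how consistent the test person's
    conscious input E and unconscious state signal ZS are.

    Both scores are assumed to be normalized to the same 0..scale range.
    Identical scores give weight 1.0; maximally contradictory scores 0.0.
    """
    return 1.0 - abs(conscious_score - unconscious_score) / scale

def weighted_overall(evaluations):
    """Combine (conscious, unconscious) score pairs into an overall result,
    weighting each pair by its consistency (illustrative scheme)."""
    weights = [consistency_weight(e, zs) for e, zs in evaluations]
    scores = [(e + zs) / 2.0 for e, zs in evaluations]
    total = sum(weights)
    if total == 0:
        return None  # all evaluations fully inconsistent -> discard
    return sum(w * s for w, s in zip(weights, scores)) / total
```

A fully inconsistent pair (for example conscious 0, unconscious 10) receives weight zero and is thus effectively rejected, matching the precautionary rejection described in the text.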
  • Part of the preliminary film product testing device 20 is also a control unit 43 in order to communicate with the individual units 21, 22, 23, 23a of the preliminary film product testing device 20 and to control them.
  • the overall assessment result is transmitted to a correction unit 24, which is used to modify the preliminary film product depending on the assessment.
  • FIG. 12 shows an evaluation unit 23 of a preliminary film product testing device according to an exemplary embodiment of the invention, as shown in FIG. 11, for example.
  • the evaluation unit 23 has a data input interface 25 with which physiological measured values ZS and input signals E are received.
  • the AI evaluation unit 26 has a neural network unit 27 which, based on an input vector V1 of the physiological measured values ZS and the input signals E, determines an evaluation vector B which includes information about an evaluation of a preliminary film product.
  • the input vector V1 of the measurements includes a large number of measured values, such as the reading speed V, the heart rate P, the skin color H, skin moisture HF, the speed V of eye movements, the scrolling speed VS, a spectrum S of the acoustic utterances of a test subject, etc.
  • Such an input vector V1 is shown in FIG. 13.
  • the evaluation vector B, which is also shown in FIG. 13, includes, for example, values for the individual sections A1 . . . A100 or line sections on a scale from 1 to 10, with 1 standing for very negative and 10 for excellent.
  • the AI evaluation unit 26 has a database 28.
  • the neural network for the neural network unit 27 is trained with the aid of the training unit 29 .
  • labeled training data can be used. These include reference data consisting of pairs of input vectors V1 and evaluation vectors B, which, for example, were also generated using the test persons or the target group on the basis of test films or corresponding preliminary film products.
  • the structures of the neural network, such as connections and weights, are then modified such that an associated score vector B of the reference data is generated by the neural network in response to an input vector of the reference data.
  • the trained neural network is then used by the neural network unit 27, for example, to evaluate the measurement vector of emotional measurement values of a test person.
  • the database 28 can also be in the form of an external database and can be connected to the training unit 29 via a data network.
  • the training unit 29 could also be arranged externally, with the trained neural network then transferred to the AI evaluation unit 26, for example via a data network.
  • the assessment B by the AI assessment unit 26, in particular the neural network unit 27, can also include a consistency check. In this case, it is checked whether the values of the status signals ZS are consistent with the inputs E generated by deliberate evaluation. If, for example, a state ZS associated with a positive emotion is measured and the conscious evaluation E shows a negative attitude, the relevant evaluation of the current section can either be completely rejected due to the inconsistency as a precaution or at least be weighted less.
  • the evaluation unit 23 can also have other units, for example a text analysis unit, with which text inputs are analyzed for their meaning and a quantitative evaluation variable is generated on the basis of the meaning.
  • the software 30 includes a server program 31 which runs on a computer, e.g. of a film production company, or is stored in the cloud.
  • the software 30 also includes an app 32, which is stored on the test persons' smartphones. If a preliminary film product is now to be tested using the method according to the invention, the preliminary film product FVP is transmitted to the test persons' smartphones with the aid of the server program 31 .
  • the test subjects can now use the app 32 to watch or listen to the preliminary film product FVP and generate reactions with corresponding status signals ZS and assessments with corresponding input signals E.
  • the app 32 has program parts that are used to control sensors on the smartphone to record the reactions and assessments. For example, the app 32 may provide a scroll tracking capability.
  • the app 32 uses a camera to record eye movements or the color of the test person's skin.
  • a camera integrated into the test person's smartphone can be used as a camera for recording the physiology of a test person.
  • the app 32 can also include capabilities for measuring the reading speed, for example based on eye movements or the scrolling speed or scrolling behavior.
  • the app 32 can have the ability to assign measurement times and/or input times to the recorded reactions and/or assessments.
  • the times at which a signal ZS associated with a reaction is measured, or at which an input E transmitting assessment information from the test person to the app 32 is recorded, are logged.
  • measurement events recorded at the same time or very close together in time are then assigned to a specific sub-section or action item of a section.
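The assignment of timestamped measurement and input events to sections, with near-simultaneous events grouped into one combined measurement event, could be sketched as follows; the tolerance value and the data layout are illustrative assumptions:

```python
from bisect import bisect_right

def assign_events_to_sections(events, section_starts, tolerance=0.5):
    """Assign timestamped measurement/input events to sections and group
    events that occur (almost) simultaneously.

    events:         list of (timestamp_seconds, payload)
    section_starts: ascending list of section start times; section i spans
                    [section_starts[i], section_starts[i+1]).
    Events closer together than 'tolerance' seconds are treated as one
    combined measurement event (illustrative rule).
    """
    assigned = {}  # section index -> list of event groups
    current_group, last_t = [], None
    for t, payload in sorted(events):
        if last_t is not None and t - last_t > tolerance and current_group:
            # Close the current group and assign it by its first timestamp.
            sec = bisect_right(section_starts, current_group[0][0]) - 1
            assigned.setdefault(sec, []).append(current_group)
            current_group = []
        current_group.append((t, payload))
        last_t = t
    if current_group:
        sec = bisect_right(section_starts, current_group[0][0]) - 1
        assigned.setdefault(sec, []).append(current_group)
    return assigned
```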
  • devices that the test person happens to be carrying and that are equipped with sensors for measuring biometric data can also be integrated into the test process.
  • a sports watch or smartwatch can be integrated into the test process and a pulse rate or skin temperature can be measured with it.
  • the app 32 preferably converts the detected signals into measured values or input texts.
  • the server program 31 accepts the measured values and input texts determined by the app 32 as part of the assessments and reactions of the test subjects and carries out an evaluation based on the transmitted information in order to generate evaluation result data B for the individual sections of the preliminary overview. During the evaluation, the reactions and assessments generated by the individual test persons are combined to form an evaluation result for the individual sections.
  • the server program 31 also carries out further processing steps, such as determining an overall test result and correcting the preliminary overview, the preliminary film product and the film product in order to improve the chances of the film production being successful.

Landscapes

  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Operations Research (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Technology (AREA)
  • Medical Informatics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Developmental Disabilities (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Polymers With Sulfur, Phosphorus Or Metals In The Main Chain (AREA)

Abstract

The invention relates to a method for testing a preliminary film product (FVP). According to the method, a preliminary film product (FVP) is provided to a selected group of test persons (8). During or directly after reception of the preliminary film product (FVP) by the test persons (8), reaction signals (E, ZS) from the test persons (8) are detected section by section. The detected reaction signals (E, ZS) are automatically analyzed in order to generate analysis result data (BE) for the individual sections (A). An overall test result (GA) of the preliminary film product (FVP) is optionally also determined on the basis of the analysis result data (B). The invention further relates to a method for correcting a preliminary film product (FVP), to a preliminary film product testing device (20), and to a preliminary film product correction device.
EP21772776.7A 2020-09-09 2021-09-02 Testing of a preliminary film product Pending EP4211629A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020123554.2A DE102020123554A1 (de) 2020-09-09 2020-09-09 Testing of a preliminary film product
PCT/EP2021/074311 WO2022053399A1 (fr) 2020-09-09 2021-09-02 Testing of a preliminary film product

Publications (1)

Publication Number Publication Date
EP4211629A1 2023-07-19

Family

ID=77801722

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21772776.7A 2020-09-09 2021-09-02 Testing of a preliminary film product

Country Status (5)

Country Link
US (1) US20230297925A1 (fr)
EP (1) EP4211629A1 (fr)
AU (1) AU2021340180A1 (fr)
DE (1) DE102020123554A1 (fr)
WO (1) WO2022053399A1 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8782681B2 (en) * 2007-03-08 2014-07-15 The Nielsen Company (Us), Llc Method and system for rating media and events in media based on physiological data
WO2013188656A1 (fr) 2012-06-14 2013-12-19 Thomson Licensing Method, apparatus and system for determining viewer reaction to content elements
GB2539949A (en) 2015-07-02 2017-01-04 Xovia Ltd Wearable Devices
US20170308931A1 (en) 2016-04-20 2017-10-26 CineMob, Inc. Enhanced viewer-directed motion picture screening
US10783443B2 (en) 2017-03-08 2020-09-22 International Business Machines Corporation Real-time analysis of predictive audience feedback during content creation
JP6749278B2 (ja) 2017-04-14 2020-09-02 Daikin Industries, Ltd. Physiological state determination device

Also Published As

Publication number Publication date
AU2021340180A1 (en) 2023-03-09
US20230297925A1 (en) 2023-09-21
DE102020123554A1 (de) 2022-03-10
WO2022053399A1 (fr) 2022-03-17

Similar Documents

Publication Publication Date Title
DE60014063T2 (de) Device and method for detecting emotions in the human voice
Langer et al. Highly automated interviews: Applicant reactions and the organizational context
Constant et al. Ethnicity, job search and labor market reintegration of the unemployed
CN107456208A (zh) System and method for assessing speech-language disorders through multimodal interaction
Bucy et al. Image bite analysis of presidential debates
DE112021006096T5 (de) Modifying the user interface of an application during a recording session
DE102014118075A1 (de) Perception model synchronizing audio and video
Stone et al. Induced forgetting and reduced confidence in our personal past? The consequences of selectively retrieving emotional autobiographical memories
Schuck et al. News framing effects and emotions
DE212016000292U1 (de) System for text-to-speech performance evaluation
Brossart et al. Assessing group process: An illustration using tuckerized growth curves.
Bahreini et al. Communication skills training exploiting multimodal emotion recognition
DE69911054T2 (de) Training device and method with a simulator of human interaction
EP1345110A2 (fr) System for adapting a human-machine interface according to the psychological profile and momentary sensitivity of a user
Silbey Designing qualitative research projects
EP4211629A1 (fr) Testing of a preliminary film product
DE202023102984U1 (de) Prompting machine-learned models using chains of thought
DE112017007900T5 (de) Systems and methods for generating natural language data
DE102007063136B4 (de) Interactive hearing training
Manes‐Rossi et al. Skeptic, enthusiast, guarantor or believer? Public managers' perception of participatory budgeting
WO2015140119A1 (fr) Network-based system and method for delivering educational content
McPherson et al. Counting qualitative data
Seufert et al. Digital competences
Tan Social cultural and situative perspective of studying emotions in teaching and learning: characteristics, challenges and opportunities
Kachel Social markers of sexual orientation and gender in speech and appearance: a combination of producer-and perceiver-centered approaches

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230224

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240216