EP3596688A1 - Method for enriching a digital content with spontaneous data - Google Patents
Method for enriching a digital content with spontaneous data
Info
- Publication number
- EP3596688A1 (application EP18713334.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- content
- perception data
- capture
- associating
- defined according
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/29—Arrangements for monitoring broadcast services or broadcast-related services
- H04H60/33—Arrangements for monitoring the users' behaviour or opinions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
Definitions
- the field of the invention is that of digital content such as images, photos, videos or audio recordings, and in particular of the digital data characterizing this content, entered manually or automatically.
- digital data are for example metadata, tags, etc.
- the communication network used by devices such as computers, mobile phones, tablets, etc. to exchange, store or consult/view this content and the data characterizing it can be any network; for example an Internet network, a local network, etc.
- the invention has applications in the field of digital content management, consumer prospecting services, etc.
- TECHNOLOGICAL BACKGROUND Millions of digital content items are created and shared every second by millions of users around the world, in particular through connected mobile devices incorporating a photo or video capture device.
- each digital content item follows its own course over time, during which it will generally be consulted or viewed by several observers, and even several times by some of these observers.
- the exchanges between the sender and a recipient of a content about the emotions caused by a photo usually take place in a declarative fashion. Indeed, any recipient can be invited to return their impressions declaratively and asynchronously, for example by placing a "like" on the Facebook service, sending a "smiley" through an instant messaging service, calling the sender to discuss it over the phone, etc.
- the owner (or sender) of a digital content has no solution for constituting an "emotional history" of the digital contents that he stores, consults or views, and/or shares with his entourage or with a wider audience.
- the invention offers a solution that does not have the drawbacks of the state of the art.
- the invention proposes a method for associating perception data with a first digital content, the method being implemented by a processor and comprising:
- the method implements an enrichment of the data characterizing a digital content.
- the method described here consists in opportunistically collecting data, called perception data, of a digital content, derived from the analysis of the expressions of the various observers of this content.
- by "observer" is meant any person in the position of a potential commentator of a content; for example, in the case of a photo or video, the viewing position is generally in front of the device displaying this content.
- image or “photo” will be used interchangeably to qualify one of the types of the first or second digital content considered in the method described herein.
- the emotional history of a photo builds an emotional dimension for this photo by assessing the qualification and the quantification of the perception data of its successive and/or simultaneous observers.
- by "emotional history" of a content is understood the set of perception data generated by this content, in particular the emotions spontaneously expressed by its observers.
- the method offers the advantage of allowing a content to be characterized by data relating to the expressions it triggers in its observers, and thus to be identifiable as such through industrial applications.
- This emotional history associated with the content brings to it an additional dimension that is built over time.
- the analysis of distinctive elements of the second content includes an identification of at least one observer.
- This mode of implementation allows an allocation of perception data collected over time according to the identity of the observers captured by the second content.
- the evolution of the perception data is then tracked relative to each identification.
- when the same observer, who may for example be the owner of the photo, consults the photo at different times, the perception data are enriched over time.
- alternatively, identification is carried out from the voice characteristics of the observer(s).
- the perception data are contained in the first content. This mode of implementation makes it possible to integrate the perception data with the metadata of the first content.
- these data constitute characteristic data of the content, as for example the date of capture of the photo or the place of capture (GPS data).
- these perception data, as such or through a redirection to them (for example a URL link), can be integrated as metadata in an EXIF-format file relating to an image file.
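As an illustration, the perception data (or a URL redirecting to them) could be appended to the image metadata alongside the existing characteristic data. The sketch below models the EXIF metadata as a plain dictionary; the "PerceptionData" field name is hypothetical (the patent does not name a tag), and a real implementation would write through an EXIF library.

```python
import json

def attach_perception_data(exif_metadata: dict, perception_entry: dict) -> dict:
    """Append one perception-data entry to the content's metadata.

    "PerceptionData" is an illustrative field name, not a standard EXIF tag.
    """
    history = json.loads(exif_metadata.get("PerceptionData", "[]"))
    history.append(perception_entry)
    exif_metadata["PerceptionData"] = json.dumps(history)
    return exif_metadata

# characteristic data already present, such as the capture date or GPS place
meta = {"DateTimeOriginal": "2018:03:15 10:12:00", "GPSInfo": "48.85,2.35"}
attach_perception_data(meta, {"date": "2018-03-16", "observer": "B", "emotion": "joy"})
```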
- the second content corresponds to one or more content type image.
- by "burst mode" is meant a succession of image captures at time intervals, in particular equal intervals, within a limited time.
- This limited time can be defined by the duration of a user's action on the trigger of the capture device, or by an automated system activated for a few seconds only (for example 1 to 5 seconds): the number of images generated by the capture device then depends on the time interval set between captures.
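The relation between window length, interval and number of captures can be sketched as follows (function and parameter names are illustrative):

```python
def burst_capture_times(window_s: float, interval_s: float) -> list:
    """Capture instants (seconds from the trigger) for a burst mode that
    shoots at equal intervals within a limited time window."""
    count = int(window_s / interval_s) + 1  # the capture at t=0 is included
    return [i * interval_s for i in range(count)]

# a 3-second window sampled every 0.5 s yields 7 images
print(burst_capture_times(3.0, 0.5))
```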
- a capture trigger is defined by at least one capture of an image performed in a period of time that begins at the beginning of the consultation of the first content.
- This implementation mode synchronizes the activation of the image capture device with the beginning of the consultation of the first content, so that an opportunistic trigger can occur, in particular from the beginning of this consultation.
- by "opportunistic triggering" is meant one or more captures triggered through the capture device, either from an evaluation of the relevance of a waiting time relative to the beginning of the consultation of the first content (in particular when consulting a first content of image type), or by the identification of previously defined instants (in particular when consulting a first content of video type), or by the automatic detection, by a module continuously analyzing the field of view of the capture device, of distinctive elements potentially transposable into perception data.
- the timestamp data of the trigger(s) of the capture device can be supplied as input parameters of the device implementing the method, for example by integrating the timestamps directly or indirectly into the metadata of the content accessed.
- the second content is then a series of captures made at different times during the playback of the content accessed.
- a capture trigger is defined by a capture of audio and/or video content in a period of time that starts at the beginning of the consultation of the first content.
- This implementation mode synchronizes the activation of the sound recording or video recording device with the beginning of the consultation of the first content, in order to apply a trigger configured as "default".
- the analysis of the distinctive elements will take longer than for one or more image captures, in particular given the higher volume of data generated and thus to be analyzed; however, the relevance of the perception data resulting from this analysis is estimated to be higher.
- the analysis of distinctive elements of the second content comprises a selection of at least one image of said video content.
- This variant makes it possible, in the case where the second content is a video, to search for and extract a selection of one or more images from this video in order to analyze their distinctive elements. This selection step then makes it possible to apply an analysis similar to that of a second content resulting directly from one or more image captures.
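A minimal sketch of this selection step, reducing a decoded frame sequence to a few stills (the sampling rate is illustrative; the patent does not fix one):

```python
def select_frames(frames: list, every_nth: int) -> list:
    """Keep every n-th frame of a video-type second content so that the
    image-oriented analysis of distinctive elements can be applied."""
    return frames[::every_nth]

frames = [f"frame_{i}" for i in range(10)]
print(select_frames(frames, 4))  # → ['frame_0', 'frame_4', 'frame_8']
```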
- the analysis of distinctive elements of the second content comprises a semantic analysis of said audio content.
- This other variant makes it possible to characterize the perception of observers according to keywords uttered by these observers, recorded and stored.
- a semantic analysis allows a simple and relevant analysis, subject to capturing data to be analyzed; indeed, while a face capture is often obtainable, a sound capture of a so-called "oral" expression may be non-existent when an observer views a content.
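A minimal sketch of such a keyword-based transposition; the lexicon below is invented for illustration, and a real system would rely on a full semantic analyzer:

```python
# Hypothetical keyword lexicon mapping recorded words to emotions.
EMOTION_KEYWORDS = {
    "wow": "surprise", "great": "joy", "awful": "disgust", "sad": "sadness",
}

def transpose_words(recorded_words: list) -> list:
    """Map the observers' recorded keywords to perception data,
    ignoring words carrying no emotional value."""
    lowered = (word.lower() for word in recorded_words)
    return [EMOTION_KEYWORDS[w] for w in lowered if w in EMOTION_KEYWORDS]

print(transpose_words(["Wow", "that", "is", "great"]))  # → ['surprise', 'joy']
```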
- a consultation of the first content includes a request for authorization of activation of a capture device from an observer.
- This mode of implementation makes it possible to leave control to the observer.
- a request to an observer to allow or refuse the activation of one or more capture devices is performed, for example by a program automatically executed at the opening of the digital content consultation application.
- the authorization request for activation of a capture device is performed during the request for access to the first content made by an observer.
- This variant makes it possible to limit the number of requests to the user.
- the method queries an observer to authorize the execution of the computer program enabling activation of the capture device during the opening of the first content.
- This program can be inserted in the photo, in particular in the metadata, and the consultation application is able to interpret these data (text, tag, URL link, JavaScript program, etc.).
- this program can include by default the automatic deactivation of the capture device (in the case where the authorization was previously accepted) when the consultation of the given content is stopped or the consultation application is closed.
- a consultation of the first content also includes an authorization request to send the perception data of the first content to at least one recipient.
- This other variant also makes it possible to limit the number of requests to the observer.
- the method queries the observer to allow the sending of the perception data when it is obtained.
- a notification of the update of the perception data will be sent to one or more recipients.
- These recipients, designated for example through the metadata of the first content, directly or indirectly by redirection to a storage location in the network, can thus consult the emotional history of the content.
- These recipients are generally the creators of this content, but may also be mere transmitters of this content.
- the second content is deleted when the association of the perception data to the first content is completed.
- This mode of implementation makes it possible to automatically free the storage space used on the device on which the content is viewed, or in a remote storage space in the network, at the end of the implementation of the method; in particular for the deletion of a second content of video type.
- the invention relates to an association device comprising an application module, able to associate perception data of a first digital content, the device being characterized in that it comprises:
- a management module for triggering one or more captures from a capture device, the result of the capture being designated as the second content
- a module for transposing said distinctive elements into perception data
- a module for associating said perception data with the first content.
- the application module allows in particular an opportunistic triggering of the capture device to obtain a second content, and an optimized management of the volume of data of the second content to be analyzed.
- the invention also relates to a computer program capable of being implemented on an association device, the program comprising code instructions which, when executed by a processor, carry out the method defined in this invention.
- a program can use any programming language. It can be downloaded from a communication network and / or saved on a computer-readable medium.
- the invention also relates to a computer-readable recording medium on which is recorded a computer program comprising program code instructions for performing the steps of the association method according to the invention as described.
- a recording medium may be any entity or device capable of storing the program.
- the medium may include storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a USB key or a hard disk.
- such a recording medium may be a transmissible medium such as an electrical or optical signal, which may be conveyed via an electrical or optical cable, by radio or by other means, so that the computer program it contains can be executed remotely.
- the program according to the invention can in particular be downloaded from a network, for example the Internet network.
- the recording medium may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the aforementioned display control method.
- each of the devices comprises application means such as instructions of the aforementioned computer program, these instructions being executed by physical means such as at least one processor and a working memory.
- Figure 1 illustrates an equipment incorporating the method described here.
- Figure 2 illustrates an embodiment of the method.
- FIG. 3 illustrates the main steps carried out for the implementation of an embodiment of the method of the invention.
- Figure 4 illustrates the application modules contained in the APP module.
- Figure 1 illustrates an electronic equipment mainly comprising a content viewing device and two content capturing devices: a photo or video capture device and an audio capture device.
- these two types of devices can be considered separately, each performing its respective main function; for example, a computer having an application for viewing digital content and a webcam connected to this computer by a wired connection or not.
- the electronic equipment is illustrated as a mobile device 100.
- the mobile device 100 comprises a display device 145, a CPU processor, an input device INPUT and an application module APP characterizing the method presented here.
- the user interaction and the manipulation of the rendering of the application module on a graphical interface can be obtained by using the display device 145, which is in our example a touch screen functionally coupled to the processor CPU controlling the interface displayed.
- the input device INPUT and the display device 145 are thus merged.
- Some mobile devices 100 may also have an INPUT input device such as a keyboard.
- the CPU processor may control the rendering and / or display of the GUI on the display device 145 depending on the type of applications, native applications or third parties.
- the CPU processor can also handle user inputs according to the present method.
- the touch panel 145 may be viewed as an input device allowing interaction with a user's finger or with other devices such as a stylus.
- the touch sensor interface or the touch panel 145 may include any suitable circuit for converting the analog signals corresponding to the touch input received on its surface into any appropriate digital touch input data. Such tactile input data may, for example, be used to make selections of parts of the graphical interface of an application.
- the input received from a user's contact is sent to the CPU.
- the touch panel 145 is configured to detect and report the location of the point of contact to the CPU, which can interpret the touches in accordance with the application and the current GUI.
- the CPU processor may initiate a task, for example a VIS application for viewing / viewing digital contents or a CAP application for capturing photos / video.
- the VIS application is, for example, the native application for viewing photos/videos, in particular in different types of format.
- the CAP application is the native application for capturing photos.
- the application module APP can dialogue with these two applications VIS and CAP.
- the application module APP can analyze the photos captured by the CAP application.
- the MEM memory includes a working memory and a memory to store, even temporarily, digital content.
- the capture device 150 makes it possible in particular to take a photo or video of the observer(s) considered by the method described here.
- the microphone 160 makes it possible to record the ambient sound of the scene of the observer (s).
- FIG. 2 illustrates an embodiment of the method described herein.
- a user A sends, via a communication network NET which may consist of a set of heterogeneous networks, a digital content, here a photo 210, with its mobile device 201 to a user B.
- the device 100 used by the user B is configured as described in Figure 1 and makes it possible to perform the method steps described in this application.
- the transmitted photo includes in its metadata, which can be read and interpreted by any visualization application, the parameterization data needed to implement the method described here.
- the service used here by the user to transmit this photo is a messaging service attaching the digital content to an electronic message.
- the digital content 210 is received by the device 100 of the user B through the step E21.
- the user B then interacts with the device 100 to access the display of the digital content 210 on the display device 145.
- the visualization application VIS uses the application module APP for the processing of the instructions dedicated to it. These instructions can be written in the metadata of the image, for example as text data or as a JavaScript program. Alternatively, a search in the network from a link or an executable can retrieve these instructions.
- the application module APP submits to the user a first request for authorization to activate the CAP capture application and, after validation, a second request for authorization to send a notification of the update of the perception data obtained after execution of the method described herein.
- the user B is described as "observer" of the photo 210.
- the APP application module, via the CAP application and the capture device 150, triggers the capture of a photograph of the observer: the capture device is directed towards the observer B.
- this capturing function is performed without any audible or visual signal emitted by the device; indeed, a temporary deactivation of these default signals can be applied.
- the result of the capture is not presented automatically on the display device 145.
- The result of this capture is then processed by the application module APP.
- an acquisition of images in burst mode leads to considering, at the input of the analysis module, a series of images 220 of the observer, ordered in time.
- the burst mode is set to capture photos from the moment the photo 210 is presented on the display device 145 of the device 100, taking a picture at regular intervals, for example during the first 3 seconds of presentation.
- the capture interval between each image can be set by the device 100 or by parameter data indicated in the image 210.
- a first substep consists in launching the search for an identification, here of the observer B, using a face detection and recognition module and one or more databases accessible by the device 100: if the identification is not made from the first image, the second image is considered, and so on. Face detection and face recognition analysis modules of the prior art are used in this method.
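The iteration over successive images can be sketched as follows; here `recognize` stands in for a prior-art face detection/recognition module returning an observer identifier or None:

```python
def identify_observer(images, recognize):
    """Try identification image by image: if it fails on the first image,
    the second is considered, and so on."""
    for image in images:
        observer_id = recognize(image)
        if observer_id is not None:
            return observer_id
    return None  # no observer identified in the series

# toy recognizer: only the third capture shows a recognizable face
result = identify_observer(["img1", "img2", "img3"],
                           lambda img: "B" if img == "img3" else None)
print(result)  # → B
```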
- a distinguishing element analysis performed by the application module APP consists in selecting an image from this series of images.
- the selected image 222 is determined as the first image constituting a "significant" transition between two chronologically ordered images.
- by "significant transition" is meant a maximum distance between two images ordered over time, resulting from the evaluation of one or more physical differences of the observer's face, these differences being evaluated quantitatively by several indicators defined by a module for morphological analysis of faces.
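A sketch of the selection of the transition image; `distance` stands in for the combined indicators of the morphological-analysis module, and the scalar "expression scores" are purely illustrative:

```python
def select_transition_image(images, distance):
    """Return the image reached after the largest change between two
    chronologically ordered images (the 'significant' transition)."""
    gaps = [distance(a, b) for a, b in zip(images, images[1:])]
    largest = max(range(len(gaps)), key=gaps.__getitem__)
    return images[largest + 1]

# toy example: scalar "expression scores" in place of face images
print(select_transition_image([0.1, 0.15, 0.6, 0.62], lambda a, b: abs(b - a)))
# → 0.6 (the image following the biggest jump)
```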
- Modules for morphological analysis of emotion recognition from faces exist in the prior art and are used in this method.
- the first image constituting a significant transition is the image referenced as element 222 in Figure 2.
- the module for analyzing the distinctive elements of the image 222, contained in the application module APP, qualifies a facial expression from a database of reference expressions: joy, surprise, fear, anger, disgust, sadness, neutral state, etc.
- a transposition module contained in the application module APP transposes this expression into perception data in a given format.
- a management module contained in the application module APP adds this data to the perception data relating to the content, for example stored in a database BD located in the network, during a step E221. Alternatively, these data are also stored in a local database of the device 100 or in the metadata of the photo 210. In this particular embodiment, the second content is deleted from the MEM memory of the device.
- a notification of the update of the perception data of this photo 210 is sent automatically, for example to the user A by the step E23.
- the address of the recipient is registered within the metadata of the photo so that the application module APP can manage the sending, for example by a message via instant messaging service.
- this database of perception data is fed by the data from the viewing episodes of the user A and / or the user B as well as the other observers of this photo 210.
- FIG. 3 illustrates the main steps carried out for the implementation of an embodiment of the method of the invention.
- the first step E09 consists of accessing the consultation/viewing of the first content by launching the playback of a video or an audio recording, or by displaying a photo on the device.
- the digital content consultation application interprets the data entered in the metadata of the photo to determine whether or not the capture device is activated during step E10: the user B validates the activation of the capture device, and also validates, for example, a given capture mode.
- the instructions stipulate the capture mode desired by the method: an image/photo, a series of photos, a video or an audio recording. Alternatively, a different capture mode can be chosen by the user B. These settings can be made by default in the settings of his device to limit repeated requests, if the user B so wishes.
- the triggering of the capture device in step E12 is carried out opportunistically, either in a programmed manner or according to the context (type of content displayed, user preference, success of the analysis, available storage space, etc.).
- the capture mode can adapt to the conditions (lighting, etc.).
- the combination of modes can also be performed: recording audio and capturing a photo.
- a burst mode is launched from the launch of the consultation.
- Step E13 constitutes an analysis of the second captured content in order to extract a photo (in the case of a photo, a series of photos or a video) for an analysis of visual expressions, and/or a series of words (in the case of a video or audio recording) for a semantic analysis of the recorded words.
- the identification of the observer or observers is performed prior to this analysis phase. Indeed, the prior art includes various face detection and recognition applications, especially in photo management applications.
- the method generates triplet type data (date of consultation, observer identification, spontaneous emotion) updated in one or more databases.
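The triplet update could be sketched with a relational store; the schema below is illustrative (the patent does not specify table or column names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE perception (
    content_id TEXT, consulted_on TEXT, observer TEXT, emotion TEXT)""")

def add_perception(content_id, consulted_on, observer, emotion):
    """Record one (consultation date, observer, spontaneous emotion) triplet."""
    conn.execute("INSERT INTO perception VALUES (?, ?, ?, ?)",
                 (content_id, consulted_on, observer, emotion))

add_perception("photo_210", "2018-03-16", "B", "surprise")
add_perception("photo_210", "2018-03-17", "A", "joy")

# the emotional history of a content is the set of its triplets
history = conn.execute("SELECT consulted_on, observer, emotion FROM perception"
                       " WHERE content_id = ?", ("photo_210",)).fetchall()
```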
- the identification of the observer can be validated directly by the observer.
- automatic deletion of the content generated by the method is performed by default in order to free memory.
- the observer can prevent the automatic erasure of the data generated for the analysis, that is to say the photos or words selected for the analysis, in order to access these data later.
- the last step E15 of the method comprises the addition of the perception data in the descriptive data of the content, in particular in the metadata of the content or in a digital file relating to this content, stored or not with said content.
- Figure 4 illustrates the application modules contained in the application module APP.
- the VIS consultation application and the CAP capture application are native or third-party applications of the mobile device 100, respectively driving the display device and the capture device.
- the VIS consultation application integrates the capability of interpreting the method-specific data integrated in the first content.
- the application module APP integrates in particular a management module GES for communicating with the VIS and CAP applications.
- the application module APP integrates the management module GES, an identification module ID based on face detection and recognition, and an extraction module EXT which extracts data from the second content (one or more images, or one or more words) from which the emotions will be analyzed by an emotion analysis module EM.
- the modules ID, EXT and/or EM can be located outside the application module APP: within the device 100, or within one or more devices located in the communication network.
- a transposition module TRANS performs the formatting into perception data, according to a format defined by the method, in particular according to the database formats or the metadata format.
- the triplet form defined by the method is the common format recognized by the VIS application.
- an association module ADD updates the perception data within the content, in particular in the metadata of the content, or in databases, local or contained in the network, such as the database BD.
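The TRANS and ADD steps, transposing the analysis result into the common triplet format and associating it with the content, could be sketched as follows. A JSON sidecar file is chosen purely for illustration; the patent leaves open whether the perception data lives in the content's metadata, a separate file, or a database:

```python
import json
from pathlib import Path

def transpose(consulted_at: str, observer_id: str, emotion: str) -> dict:
    """TRANS module: format the result as the common triplet recognised by VIS."""
    return {"date": consulted_at, "observer": observer_id, "emotion": emotion}

def associate(content_path: str, triplet: dict) -> None:
    """ADD module: append the perception data to a sidecar file next to the content."""
    sidecar = Path(content_path).with_suffix(".perception.json")
    records = json.loads(sidecar.read_text()) if sidecar.exists() else []
    records.append(triplet)
    sidecar.write_text(json.dumps(records, indent=2))
```

A database-backed ADD module would replace the sidecar read/append/write with an INSERT into the database BD.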
- the modules can be implemented in software form ("software"), in which case each takes the form of a program executable by a processor, or in hardware form ("hardware"), as an application-specific integrated circuit (ASIC) or a system-on-a-chip (SoC), or as a combination of hardware and software, such as an application program intended to be loaded and executed on an FPGA-type component (Field Programmable Gate Array).
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- Strategic Management (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- General Engineering & Computer Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Marketing (AREA)
- Software Systems (AREA)
- General Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Signal Processing (AREA)
- Human Resources & Organizations (AREA)
- Multimedia (AREA)
- Tourism & Hospitality (AREA)
- Game Theory and Decision Science (AREA)
- Primary Health Care (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- Data Mining & Analysis (AREA)
- Technology Law (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Transfer Between Computers (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1752063A FR3064097A1 (en) | 2017-03-14 | 2017-03-14 | METHOD FOR ENRICHING DIGITAL CONTENT BY SPONTANEOUS DATA |
PCT/FR2018/050580 WO2018167420A1 (en) | 2017-03-14 | 2018-03-12 | Method for enriching a digital content with spontaneous data |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3596688A1 true EP3596688A1 (en) | 2020-01-22 |
Family
ID=58993034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18713334.3A Pending EP3596688A1 (en) | 2017-03-14 | 2018-03-12 | Method for enriching a digital content with spontaneous data |
Country Status (4)
Country | Link |
---|---|
US (1) | US11954698B2 (en) |
EP (1) | EP3596688A1 (en) |
FR (1) | FR3064097A1 (en) |
WO (1) | WO2018167420A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220254336A1 (en) * | 2019-08-12 | 2022-08-11 | 100 Brevets Pour La French Tech | Method and system for enriching digital content representative of a conversation |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7533806B1 (en) * | 1998-10-09 | 2009-05-19 | Diebold, Incorporated | Reading of image data bearing record for comparison with stored user image in authorizing automated banking machine access |
US20040001616A1 (en) * | 2002-06-27 | 2004-01-01 | Srinivas Gutta | Measurement of content ratings through vision and speech recognition |
JP5249223B2 (en) * | 2006-09-07 | 2013-07-31 | ザ プロクター アンド ギャンブル カンパニー | Methods for measuring emotional responses and preference trends |
WO2017223513A1 (en) * | 2016-06-23 | 2017-12-28 | Outernets, Inc. | Interactive content management |
2017
- 2017-03-14 FR FR1752063A patent/FR3064097A1/en not_active Withdrawn

2018
- 2018-03-12 EP EP18713334.3A patent/EP3596688A1/en active Pending
- 2018-03-12 WO PCT/FR2018/050580 patent/WO2018167420A1/en unknown
- 2018-03-12 US US16/493,170 patent/US11954698B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US20200074482A1 (en) | 2020-03-05 |
WO2018167420A1 (en) | 2018-09-20 |
FR3064097A1 (en) | 2018-09-21 |
US11954698B2 (en) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10623813B2 (en) | Systems and methods for associating media content with viewer expressions | |
US20080219658A1 (en) | Real time transmission of photographic images from portable handheld devices | |
WO2010038112A1 (en) | System and method for capturing an emotional characteristic of a user acquiring or viewing multimedia content | |
TWI556640B (en) | Media file management method and system, and computer-readable medium | |
WO2013189317A1 (en) | Human face information-based multimedia interaction method, device and terminal | |
US9081801B2 (en) | Metadata supersets for matching images | |
US11941048B2 (en) | Tagging an image with audio-related metadata | |
EP3596688A1 (en) | Method for enriching a digital content with spontaneous data | |
US11163822B2 (en) | Emotional experience metadata on recorded images | |
EP1849299B1 (en) | Method and device for audiovisual programme editing | |
EP1709782B1 (en) | Method enabling a user of a mobile telephone to export multimedia data to an electronic data page | |
FR3026594A1 (en) | METHOD, PROGRAM AND DEVICE FOR MARKING CONTENT | |
WO2022263925A1 (en) | Method for operating an electronic device to browse a collection of images | |
FR3138841A1 (en) | Method and device for constructing a knowledge base with the aim of using the application functions of a plurality of software programs in a transversal manner. | |
WO2008006999A1 (en) | System and method of information management | |
WO2014114877A1 (en) | Method for managing documents captured on a mobile device, and device suitable for carrying out said method | |
FR3028976A1 (en) | METHOD AND DEVICE FOR REPRODUCING A SUCCESSION OF CONTENT | |
FR3045880A1 (en) | METHOD FOR CONTROLLING THE CONSULTATION OF DATA RELATING TO A SOFTWARE APPLICATION INSTALLED IN A COMMUNICATION TERMINAL | |
FR3010557A1 (en) | AUTOMATIC PROCESSING OF MULTIMEDIA DATA RELEASED BY A COMMUNICATION TERMINAL | |
FR2934909A1 (en) | Tag applications managing and executing system, has computer communicating instructions to be executed to central server while sending identification value read on identification unit by identification tag reading device | |
KR20160139818A (en) | Method and apparatus for controlling display of contents, and computer program for executing the method | |
FR2961920A1 (en) | WIDGET TV CAPTURE SYSTEM | |
WO2015044590A1 (en) | Method for authenticating a user provided with a first device by a second device | |
FR3015829A1 (en) | METHOD AND SYSTEM FOR TRANSFERRING AUDIO FILE |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20191011 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ORANGE |
| RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ORANGE |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20211021 |