WO2018098780A1 - Interactive advertisement display method, terminal and smart city interaction system - Google Patents

Interactive advertisement display method, terminal and smart city interaction system

Info

Publication number
WO2018098780A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
content
reaction
facial
cloud server
Prior art date
Application number
PCT/CN2016/108239
Other languages
English (en)
French (fr)
Inventor
王建迎
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2016/108239 priority Critical patent/WO2018098780A1/zh
Priority to CN201680003359.1A priority patent/CN107278374B/zh
Publication of WO2018098780A1 publication Critical patent/WO2018098780A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals

Definitions

  • the present application relates to the field of smart cities, and particularly relates to an interactive advertisement display method, a terminal, and a smart city interaction system.
  • the Smart City System uses information and communication technologies to sense, analyze, and integrate key information about urban operational systems, so as to respond intelligently to a variety of needs, including people's livelihood, environmental protection, public safety, urban services, and industrial and commercial activities. It utilizes advanced information technology to realize intelligent operation of the city, thereby creating a better life for the people in the city and promoting the city's sustainable development.
  • Public advertising screens in existing cities generally only play advertisements directly or relay popular video content. This is one-way information dissemination and cannot achieve deep interaction with viewers or users.
  • an improved smart advertisement display can be turned on when someone passes by, with advertisements targeted by judging the person's age and gender through face recognition technology. However, such targeted advertisement play does not reflect the fundamental needs of users, and this improved public advertising screen cannot support the construction of a city smart system.
  • the technical problem mainly solved by the present application is to provide a smart advertisement display capable of realizing human-computer interaction.
  • the terminal, based on image and audio recognition technology, accurately analyzes the needs of the on-site public and pushes advertisements that truly meet the user's needs.
  • the application also relates to a smart city interaction system which, based on the image and audio recognition technology of the terminal and the management and statistical analysis of the cloud server, provides a city management mode that is smarter and closer to the needs of the public.
  • the present application provides the following technical solutions.
  • an embodiment of the present application provides an interactive advertisement display method, including the following steps:
  • receiving advertisement data related to the content of the reaction, found according to that content.
  • the embodiment of the present application further provides an interactive advertisement display terminal, including:
  • a reaction recognition module configured to collect a response of the viewing user to the content played by the display screen, and identify the content of the reaction
  • a display module for placing an advertisement based on the advertisement data.
  • the embodiment of the present application further provides a smart city interaction method, including:
  • At least one city medium accessing the cloud server initiates an interaction request
  • the cloud server selects one of the plurality of networked advertisement display terminals as an interactive terminal according to the requirement of the interaction request;
  • At least one city medium for initiating an interaction request;
  • the advertisement display terminal includes a reaction recognition module, configured to collect a response of the viewing user to the display content of the display screen, and identify the content of the reaction;
  • the video call module establishes, through the cloud server, a video call between the viewing user and the city medium;
  • the cloud server selects one of the plurality of networked advertisement display terminals as an interactive terminal according to the requirement of the interaction request and the correlation between the content of the response and the interaction request, and a video call with the viewing user is established through the interactive terminal and the cloud server.
  • the embodiment of the present application further provides an electronic device, including:
  • At least one processor and,
  • a memory communicatively coupled to the at least one processor, a communication component, an audio data collector, and a video data collector;
  • the memory stores instructions executable by the at least one processor; the instructions are invoked by the at least one processor to invoke data of the audio data collector and the video data collector and to establish a connection with the cloud server through the communication component, so that the at least one processor is able to perform the method as described above.
  • the embodiment of the present application further provides a non-transitory computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions for causing a computer to execute the method described above.
  • the embodiment of the present application further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium; the computer program comprises program instructions which, when executed by a computer, cause the computer to perform the method as described above.
  • the beneficial effects are that the interactive advertisement display method, terminal and smart city interaction system provided by the embodiments of the present application push adaptive advertisements for the real needs of users in a real-time online interactive manner, and are smarter and more humane;
  • the intelligent advertisement display terminal for realizing human-computer interaction, based on image and audio recognition technology, accurately analyzes the needs of the on-site public, and pushes advertisements that truly meet the needs of users;
  • the smart city interaction system of the present application is based on the image and audio recognition technology of the terminal and the management and statistical analysis of the cloud server, and provides a city interaction mode that manages smarter and closer to the public demand.
  • FIG. 1 is a system framework diagram of a smart city interaction system provided by an embodiment of the present application.
  • FIG. 2 is a block diagram of an interactive advertisement display terminal provided by an embodiment of the present application.
  • FIG. 3 is a block diagram of a cloud server for interactive advertisement display provided by an embodiment of the present application.
  • the interactive advertisement display method, system, terminal, cloud server and smart city interaction system provided by the application are all based on the interactive advertisement display terminal 100, which uses image and audio recognition technology to mine the user's real needs from the user's behavior and verbal responses during real-time online interaction, and to push advertisements that meet the user's current needs.
  • FIG. 2 is a block diagram of the interactive advertisement display terminal.
  • the interactive advertisement display terminal 100 includes a processor 110, a user identification module 120, a reaction identification module 130, a sending module 140, and an obtaining module 150.
  • the user identification module 120 includes a face recognition module 122.
  • the response recognition module 130 collects, once the viewing user image is recognized, the response of the viewing user to the content played by the display screen, and identifies the content of the reaction.
  • the various functional modules implement their respective functions under the control of the processor 110.
  • the reaction identification module 130 includes an audio data acquisition module 132, a video data acquisition module 136, and a matching module 138 for matching the recognized audio content with the video data.
  • alternatively, the reaction identification module 130 may include only an image acquisition module and an image recognition module.
  • in that case, the response identification module 130 identifies the reaction content from the video data alone. Specifically, from the image acquired by the video data collection module 136, the image recognition module identifies the location of the viewing user's eyes and determines whether the user is paying attention to the advertisement, or interprets what the user said based on lip-reading software.
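The eye-position check described above can be sketched as a simple heuristic. The thresholds and the frontal-face assumption below are illustrative, not specified by the application:

```python
def is_watching(left_eye, right_eye, face_box):
    """Heuristic: a roughly frontal face (eyes level and centred in the
    face box) is treated as 'paying attention to the screen'.
    left_eye/right_eye are (x, y); face_box is (x, y, width, height)."""
    fx, fy, fw, fh = face_box
    # Both eyes should sit in the upper half of the face box.
    if not (fy < left_eye[1] < fy + fh / 2 and fy < right_eye[1] < fy + fh / 2):
        return False
    # Eyes roughly level: small vertical offset relative to face height.
    if abs(left_eye[1] - right_eye[1]) > 0.1 * fh:
        return False
    # Eye midpoint roughly centred horizontally (face not turned away).
    mid_x = (left_eye[0] + right_eye[0]) / 2
    return abs(mid_x - (fx + fw / 2)) < 0.15 * fw
```

A production system would derive the eye and face coordinates from a detector; this sketch only shows the decision rule applied to them.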
  • the user identification module 120 obtains an initial image from a display screen of the advertisement display terminal and recognizes a viewing user image in the initial image.
  • the face recognition module 122 is configured to find, from the initial image, a user who views the advertisement and has a willingness to interact.
  • the face recognition module 122 acquires an elliptical contour from the initial image, splices the regional color highlights within the elliptical contour as element points into a facial 3D model, and compares the facial 3D model with a base model to identify all faces in the initial image. The face with the clearest definition and appropriate facial symmetry is taken as the image of the user who is viewing.
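The selection of the clearest, most symmetric face can be sketched as a scoring step. The sharpness proxy and the multiplicative score below are illustrative assumptions, not the application's actual formula:

```python
def sharpness(gray):
    """Clarity proxy: mean absolute horizontal intensity difference over a
    grayscale face crop given as a list of pixel rows."""
    diffs = [abs(row[i + 1] - row[i]) for row in gray for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def pick_viewing_user(faces):
    """faces: list of dicts with a 'crop' (grayscale rows) and a 'symmetry'
    score in [0, 1]; return the face judged clearest and most frontal."""
    return max(faces, key=lambda f: sharpness(f["crop"]) * f["symmetry"])
```

In practice the sharpness of a crop is often measured with the variance of a Laplacian filter; the difference-based proxy here keeps the sketch dependency-free.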
  • the reaction identification module 130, once the viewing user image is recognized, collects video data of a set time period through the video data collection module 136, while the audio data collection module 132 collects audio data of the same set time period.
  • the reaction identification module 130 identifies a user facial action based on the video data, and the response recognition module 130 identifies the sound content based on the audio data.
  • the transmitting module 140 transmits the recognized sound content to the cloud server 300 when the user's facial action matches the sound content.
  • the matching module 138 matches the facial motion to the sound by comparing the frequency of the recognized user facial motion with that of the audio data.
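A minimal sketch of this frequency-based matching, treating the mouth-openness track and the audio amplitude envelope as two signals sampled over the same window. The cycle-counting rule, threshold, and tolerance are made up for illustration; the application does not specify the comparison formula:

```python
def count_cycles(signal, threshold=0.5):
    """Count rising threshold crossings, i.e. activity cycles in a signal."""
    cycles, above = 0, False
    for v in signal:
        if v > threshold and not above:
            cycles += 1
            above = True
        elif v <= threshold:
            above = False
    return cycles

def motion_matches_audio(mouth_openness, audio_envelope, tolerance=1):
    """Accept the pairing when both streams show a similar number of
    activity cycles, i.e. mouth movement tracks the speech rhythm."""
    return abs(count_cycles(mouth_openness) - count_cycles(audio_envelope)) <= tolerance
```

When the counts diverge (for example, the mouth moves but the audio is silent, suggesting the sound came from someone else), the pairing is rejected.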
  • alternatively, the reaction identification module 130 identifies the reaction content from the video data alone.
  • the response identification module 130 identifies, through the image recognition module and from the image acquired by the video data collection module 136, the viewing user and that user's reaction to the current advertisement display terminal.
  • the advertisement display terminal 100 sends the identified reaction content to the cloud server 300 and receives the advertisement data related to the response content returned by the cloud server 300 according to the response content, and then delivers the advertisement based on the advertisement data through the display module.
  • the cloud server includes: a sending module 310, a processing module 320, and a receiving module 330.
  • the receiving module 330 receives the recognized sound content sent from the advertisement display terminal 100 when the user's facial motion matches the sound content.
  • the user's facial motion and sound content are obtained by acquiring an initial image, identifying the viewing user image in the initial image, collecting video data and audio data of a set time period once the viewing user image is recognized, identifying the user's facial action based on the video data, and identifying the sound content based on the audio data.
  • the processing module 320 determines the advertisement data related to the sound content according to the sound content.
  • the sending module 310 sends the advertisement data.
  • the face recognition module 122 and the audio data collection module 132 of the advertisement display terminal 100 run pre-installed face recognition and voice recognition programs for collecting the viewing user's feedback.
  • the advertisement display terminal 100 is connected to the cloud server 300 through a dedicated acceleration network, and uploads the information to the cloud computing big data analysis background of the cloud server 300.
  • a background program analyzes the information and returns the corresponding result to the advertisement display terminal 100, realizing scene applications such as advertisement delivery, real-time interaction, and human-machine interaction.
  • the advertisement display terminal 100 identifies the users in front of its display screen through the face recognition module, and identifies the user who is watching the screen by analyzing data such as the angle, distance, and heat of the face and eyes.
  • the video data collecting module 136 collects the user's mouth motion, and the audio data collecting module 132 collects the sound and analyzes whether its source belongs to that user.
  • the reaction identification module 130 uploads the sound content in real time to the background database (the cloud server) for analysis and processing.
  • the cloud server activates a fixed-point advertisement delivery program to appropriately extend the play time of advertisements that interest the user. For example, on hearing users discuss "where to buy a house", it can place a real-estate advertisement; if an advertisement is suspected of gender discrimination, the cloud server switches away from the advertisement content the user finds objectionable.
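The keyword-driven adjustment described here can be sketched as a lookup table on the cloud side. The table entries, instruction names, and function signature below are hypothetical:

```python
# Hypothetical keyword -> advertisement-category table on the cloud server.
AD_KEYWORDS = {
    "buy a house": "real_estate",
    "eat": "catering",
    "attraction": "travel",
}

def adjust_ads(utterance, current_ad, objection=False):
    """Return an ad-adjustment instruction for the recognized utterance.
    If the viewer objects to the current advertisement, switch it out first."""
    if objection:
        return ("switch_away", current_ad)
    for keyword, category in AD_KEYWORDS.items():
        if keyword in utterance:
            return ("play", category)
    return ("keep", current_ad)
```

The terminal would then act on the returned instruction, e.g. `("play", "real_estate")` triggers a real-estate advertisement.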
  • the advertisement display terminal 100 acquires an initial image of the user in front of the display screen through the face recognition module;
  • the advertisement display terminal 100 sends the sound content to the cloud server 300;
  • the smart advertising screen serves ads based on the ad adjustment instructions.
  • the advertisement display terminal 100 identifies all identifiable users in front of the display screen through the face recognition module, and recognizes the user who is watching the screen by analyzing the angle and distance of the face and eyes.
  • the video data collecting module 136 collects the user's mouth motion, the audio data collecting module 132 collects the sound and analyzes whether its source belongs to that user, and the reaction identification module 130 uploads the sound content in real time to the background database (the cloud server) for analysis and processing.
  • when the reaction identification module 130 collects a specific sound (for example, "I need an attraction" or "a restaurant"), the cloud server 300 activates navigation, tour-guide, catering, and similar services to meet the user's needs.
  • the audio data of the viewing user is collected by the voice recognition module, and the sound content is recognized from the audio data;
  • the cloud server 300 determines that the received sound data matches specific sound data saved on the server (for example, "I want to eat"), and sends a corresponding advertisement adjustment instruction (for example, activate the catering system) to the advertisement display terminal 100;
  • the advertisement display terminal 100 places an advertisement according to the advertisement adjustment instruction (for example, activating the catering system).
  • FIG. 4 is a flow chart for obtaining video and audio data through an advertisement display terminal to implement an interactive advertisement display method, which is illustrated from the processing perspective of the advertisement display terminal.
  • the interactive display method includes the following steps:
  • Step 410 Acquire an initial image in front of the display screen, and identify the viewing user image in the initial image.
  • Step 420 Collect the viewing user's response to the content played by the display screen, identify the content of the reaction, and send the identified reaction content to the cloud server. One specific implementation is: collect video data and audio data of a set time period, identify the user's facial motion based on the video data, and recognize the sound content based on the audio data. When no viewing user image is found, continue acquiring the initial image in front of the display screen;
  • Step 440 Receive the advertisement data related to the sound content returned by the cloud server according to the sound content;
  • Step 450 Place the advertisement according to the advertisement data.
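Steps 410 to 450 can be sketched as one terminal-side cycle. The `camera`, `mic`, `cloud`, and `display` collaborator objects, their method names, and the stubbed recognition step are assumptions for illustration:

```python
def recognize_reaction(video, audio):
    """Stub for the recognition step: in the application, the facial motion
    seen in the video is matched against the audio before the recognized
    sound content is accepted. Here the audio stands in for that content."""
    return audio

def run_terminal_cycle(camera, mic, cloud, display):
    """One pass of steps 410-450. Assumed interfaces: camera.capture() -> frame,
    camera.find_viewing_user(frame) -> user or None, camera.record(seconds),
    mic.record(seconds), cloud.lookup(content) -> ad data, display.play(ad)."""
    # Step 410: acquire the initial image and look for a viewing user in it.
    frame = camera.capture()
    if camera.find_viewing_user(frame) is None:
        return False                      # no viewer yet: keep acquiring images
    # Step 420: collect the reaction over a set time period.
    video = camera.record(seconds=3)
    audio = mic.record(seconds=3)
    content = recognize_reaction(video, audio)
    # Step 440: receive advertisement data related to the reaction content.
    ad = cloud.lookup(content)
    # Step 450: place the advertisement.
    display.play(ad)
    return True
```

A real terminal would loop this cycle continuously, re-entering at step 410 whenever no viewing user is found.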
  • the step of identifying the viewing user image in the initial image further comprises: acquiring an elliptical contour from the initial image, splicing the regional color highlights within the elliptical contour as element points into a facial 3D model, and comparing the facial 3D model with a base model to identify all user faces in the initial image; the face with the clearest definition and appropriate facial symmetry is taken as the image of the user who is viewing.
  • the reaction content is obtained by: acquiring an initial image and identifying the viewing user image in it, then, once the viewing user image is recognized, collecting the viewing user's response to the display content of the display screen and identifying the content of the reaction. Specifically, when the user's facial action matches the sound content, the recognized sound content is received; the user's facial motion and the sound content are obtained by acquiring an initial image, recognizing the viewing user image in it, collecting video data and audio data of a set time period once the viewing user image is recognized, identifying the user's facial motion based on the video data, and identifying the sound content based on the audio data;
  • the cloud server sends the advertisement data back to the advertisement display terminal 100.
  • Figure 1 shows the system framework of the smart city interactive system.
  • the smart city interaction system is also based on the image and audio recognition technology of the interactive advertising display terminal and the management and statistical analysis of the cloud server, providing a way of managing urban interactions that are smarter and closer to the public's needs.
  • the smart city interaction system includes at least one city medium 400, a cloud server 300, and a plurality of advertisement display terminals 100 networked with the cloud server 300.
  • when the content of the viewing user's reaction recognized by the selected advertisement display terminal 100 correlates with the interaction request (which can also be understood as the case in which the viewing user's facial action matches the sound content), the host 410 of the city medium 400 establishes a video call with the viewing user through the advertisement display terminal 100 and the cloud server 300.
  • there may be several city media 400 accessing the cloud server 300; the number depends on the carrying capacity of the cloud server 300.
  • the moderator 410 initiates an interactive request through the city medium 400.
  • the advertisement display terminal realizes the functions of collecting, extracting, and recognizing user information: it finds the user who is watching through image acquisition, sound collection, image recognition, and voice recognition, and matches the recognized facial action against the audio frequency.
  • the matching module of the advertisement display terminal 100 (that is, the interactive terminal) completes the matching of facial motion and sound by comparing the frequency of the recognized user facial motion with that of the audio data.
  • As an example of an "everybody interacts" scenario: when the advertisement display terminal 100 plays a real-time interview with a celebrity and the host 410 chooses audience interaction, the host 410 asks the cloud server 300 to make a random selection. The cloud server 300 randomly selects one advertisement display terminal 100 from the large number of networked terminals, and through that terminal randomly selects a viewer who is watching in front of the screen. When a viewer verified by the advertisement display terminal 100 is selected, the city medium 400 displays the viewer's image, and the video calling system is activated through the advertisement display terminal 100, so that the viewer and the interviewed celebrity communicate in real time, while the session is played simultaneously on all advertisement display terminals 100 to realize real-time interaction in a public environment.
  • the cloud server 300 sends a video call request to the selected advertisement display terminal 100, that is, the interactive terminal;
  • all the advertisement display terminals 100 display the user image information in full screen, activate the video call system according to the video call request, and realize real-time communication between the user and the celebrity.
  • the application also relates to a smart city interaction method, including:
  • At least one city medium accessing the cloud server initiates an interaction request
  • the interactive terminal completes the following steps:
  • the user's response to the content played by the display screen is collected, the content of the reaction is identified, and the identified reaction content is sent to the cloud server;
  • a video call between the viewing user and the city medium is established through the interactive terminal and the cloud server according to the correlation between the content of the response and the interaction request.
  • FIG. 5 is a flow chart showing a method for acquiring video and audio data through an advertisement display terminal to implement a smart city interaction method.
  • the embodiment of the present application further relates to a smart city interaction method, and the method includes:
  • Step 520 The cloud server selects one of the networked advertisement display terminals as the interactive terminal according to the requirements of the interaction request;
  • the interactive terminal completes the following steps:
  • Step 530 Acquire an initial image in front of the display screen, and identify the viewing user image in the initial image.
  • Step 550 When the user's facial action matches the sound content, a video call between the viewing user and the city medium is established through the interactive terminal and the cloud server. When the user's facial motion does not match the sound content, a recognition error is indicated, and the method returns to acquiring the initial image in front of the display screen and identifying the viewing user image in it. If recognition and verification fail multiple times, no-user-identified information is returned to the cloud server, and the cloud server can randomly select another advertisement display terminal 100 to identify and verify, until a user for the video connection is found.
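The server-side retry loop of Step 550, in which the cloud server keeps re-selecting terminals until one verifies a viewing user, can be sketched as follows. The `verify_viewing_user()` method and the attempt limit are illustrative assumptions:

```python
import random

def select_interactive_terminal(terminals, max_attempts=5, rng=random):
    """Randomly pick networked terminals until one verifies a viewing user
    whose facial motion matches the sound content. terminals is a list of
    objects with a verify_viewing_user() -> bool method (assumed interface)."""
    for _ in range(max_attempts):
        terminal = rng.choice(terminals)
        if terminal.verify_viewing_user():
            return terminal       # a user for the video connection was found
    return None                   # report back: no user identified
```

Returning `None` corresponds to the cloud server receiving no-user-identified information after repeated unsuccessful verification.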
  • the interactive advertisement display method, terminal, and smart city interaction system provided by the embodiments of the present application push adaptive advertisements for the real needs of users in a real-time online interactive manner, which is smarter and more humane. The intelligent advertisement display terminal of the present application, which realizes human-computer interaction, accurately analyzes the needs of the on-site public based on image and audio recognition technology and pushes advertisements that truly meet users' needs. The smart city interaction system of the present application, based on the image and audio recognition technology of the terminal and the management and statistical analysis of the cloud server, provides a city interaction management mode that is smarter and closer to public demand.
  • FIG. 6 is a schematic diagram of the hardware structure of the electronic device 600 of the interactive advertisement display method provided by the embodiment of the present application. As shown in FIG. 6, the electronic device 600 includes:
  • One or more processors 610, a memory 620, a human-machine interaction unit 630, a display unit 640, and a communication component 650; one processor 610 is taken as an example in FIG. 6.
  • the human-machine interaction unit 630 includes an audio data collector and a video data collector.
  • the memory 620 stores instructions executable by the at least one processor 610, the instructions being invoked by the at least one processor to invoke data of the audio data collector and the video data collector, and the communication component 650 establishes a connection with the cloud server. To enable the at least one processor to execute the interactive advertisement presentation method.
  • the processor 610, the memory 620, the display unit 640, and the human-machine interaction unit 630 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
  • the memory 620 is a non-volatile computer-readable storage medium that can store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the interactive advertisement display method in the embodiment of the present application (e.g., the user identification module 120, reaction identification module 130, transmission module 140, and acquisition module 150 shown in FIG. 2).
  • the processor 610 executes various functional applications and data processing of the server by running non-volatile software programs, instructions, and modules stored in the memory 620, that is, implementing the interactive advertisement display method in the above method embodiments.
  • the memory 620 can include a storage program area and a storage data area, wherein the storage program area can store an operating system and an application required for at least one function, and the storage data area can store data created according to the use of the interactive advertisement display electronic device, and the like.
  • memory 620 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 620 can optionally include memory remotely located relative to processor 610 that can be connected to the interactive advertising display electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the one or more modules are stored in the memory 620 and, when executed by the one or more processors 610 (after the user completes the setting interaction of the private content library through the human-machine interaction unit 630), perform the interactive advertisement display method in any of the above method embodiments, for example, performing method steps 410 to 450 in FIG. 4 described above and implementing the functions of the user identification module 120, reaction identification module 130, sending module 140, and obtaining module 150 in FIG. 2.
  • the electronic device of the embodiments of the present application exists in various forms, including but not limited to:
  • Mobile communication devices: these devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication. Such terminals include smartphones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
  • Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA (personal digital assistant), MID (mobile Internet device), and UMPC (ultra-mobile personal computer) devices, etc.
  • Portable entertainment devices: these devices can display and play multimedia content. Such devices include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable in-vehicle navigation devices.
  • Servers: devices that provide computing services. A server consists of a processor, a hard disk, memory, a system bus, etc. A server is similar to a general-purpose computer architecture, but because it needs to provide highly reliable services, it has higher requirements in terms of processing power, stability, reliability, security, scalability, and manageability.
  • Other electronic devices with data interaction functions.
  • the embodiment of the present application provides a non-transitory computer-readable storage medium storing computer-executable instructions that are executed by one or more processors, such as one processor 610 in FIG. 6, to enable the one or more processors to perform the interactive advertisement display method in any of the foregoing method embodiments, for example, to perform method steps 410 to 450 in FIG. 4 described above and to implement the functions of the modules in FIG. 2.
  • the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Social Psychology (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Marketing (AREA)
  • Computer Graphics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An interactive advertisement display method, comprising the following steps: collecting a viewing user's reaction to content played on a display screen, recognizing the content of the reaction, and sending the recognized reaction content to a cloud server; receiving advertisement data related to the reaction content returned by the cloud server according to the reaction content; and delivering an advertisement according to the advertisement data.

Description

Interactive Advertisement Display Method, Terminal, and Smart City Interaction System

Technical Field
The present application relates to the field of smart cities, and in particular to an interactive advertisement display method, a terminal, and a smart city interaction system.
Background
With the development of the Internet of Things, network transmission, and big-data technology, smart city systems now have a technical foundation. A smart city system uses information and communication technology to sense, analyze, and integrate key information about the city's operating systems, and thereby responds intelligently to various needs, including people's livelihood, environmental protection, public safety, city services, and industrial and commercial activities. By using advanced information technology to run the city intelligently, it creates a better life for the people in the city and promotes the city's sustainable development.
Public advertising screens in existing cities generally only play advertisements directly, or rebroadcast trending video information. This is one-way information dissemination and cannot achieve in-depth interaction with the audience or users.
For example, Chinese patent application No. 201310282805.5 discloses an intelligent advertising display screen that saves energy and can play different advertisement content for different groups of people, comprising a human-body sensor, a display screen, a memory, a camera, and a central processor. The intelligent advertising display screen of that invention can determine whether someone is passing by, thereby turning the display screen on or off to save power; at the same time, it can determine a person's age and gender by face recognition technology and thereby play, in a targeted manner, advertisements for products suitable for that age and gender group.
Although this improved intelligent advertising display screen can be turned on when someone passes by and can play targeted advertisements by determining a person's age and gender through face recognition, such targeted advertisement playback does not reflect the user's fundamental needs. Moreover, this improved public advertising screen cannot support the construction of a smart city system.
Therefore, the intelligent advertising display screens of the prior art still need improvement.
Summary
The main technical problem solved by the present application is to provide an intelligent advertisement display terminal capable of human-machine interaction. Based on image and audio recognition technology, the terminal precisely analyzes the needs of the on-site public and pushes advertisements that truly meet users' needs. The present application also relates to a smart city interaction system that, based on the terminal's image and audio recognition technology and the management and statistical analysis of a cloud server, provides a city interaction mode that is smarter in management and closer to public needs.
To solve the above technical problem, the present application provides the following technical solutions.
In a first aspect, an embodiment of the present application provides an interactive advertisement display method, comprising the following steps:
collecting a viewing user's reaction to content played on a display screen;
recognizing the content of the viewing user's reaction;
finding advertisement data related to the reaction content according to the reaction content; and
delivering an advertisement according to the advertisement data.
In a second aspect, an embodiment of the present application further provides an interactive advertisement display terminal, comprising:
a reaction recognition module, configured to collect a viewing user's reaction to content played on a display screen and recognize the content of the reaction;
an obtaining module, configured to find advertisement data related to the reaction content according to the reaction content; and
a display module, configured to deliver an advertisement according to the advertisement data.
In a third aspect, an embodiment of the present application further provides a smart city interaction method, comprising:
initiating, by at least one city medium connected to a cloud server, an interaction request;
selecting, by the cloud server according to the requirements of the interaction request, one of several networked advertisement display terminals as an interaction terminal;
completing, by the interaction terminal, the following steps:
collecting a viewing user's reaction to content played on a display screen, recognizing the content of the reaction, and sending the recognized reaction content to the cloud server; and
establishing, by the cloud server according to the correlation between the reaction content and the requirements of the interaction request, a video call between the viewing user and the city medium through the interaction terminal and the cloud server.
In a fourth aspect, an embodiment of the present application further provides a smart city interaction system, comprising:
at least one city medium, configured to initiate an interaction request;
a cloud server, to which the city medium is connected; and
several advertisement display terminals networked with the cloud server, each advertisement display terminal comprising a reaction recognition module, configured to collect a viewing user's reaction to content played on a display screen and recognize the content of the reaction, and a video call module, configured to establish a video call between the viewing user and the city medium through the cloud server;
wherein the cloud server selects, according to the requirements of the interaction request and the correlation between the reaction content and the requirements of the interaction request, one of the several networked advertisement display terminals as an interaction terminal, and a video call between the viewing user and the city medium is established through the interaction terminal and the cloud server.
In a fifth aspect, an embodiment of the present application further provides an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor, a communication component, an audio data collector, and a video data collector; wherein
the memory stores instructions executable by the at least one processor, and when executed by the at least one processor, the instructions invoke data from the audio data collector and the video data collector and establish a connection with a cloud server through the communication component, so that the at least one processor can perform the method described above.
In a sixth aspect, an embodiment of the present application further provides a non-transitory computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method described above.
In a seventh aspect, an embodiment of the present application further provides a computer program product, comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method described above.
The beneficial effects of the present application are as follows. The interactive advertisement display method, terminal, and smart city interaction system provided by the embodiments of the present application push adaptive advertisements for users' real needs in a real-time online interactive manner, which is smarter and more humane. The intelligent advertisement display terminal of the present application, which realizes human-machine interaction, precisely analyzes the needs of the on-site public based on image and audio recognition technology and pushes advertisements that truly meet users' needs. The smart city interaction system of the present application, based on the terminal's image and audio recognition technology and the management and statistical analysis of the cloud server, provides a city interaction mode that is smarter in management and closer to public needs.
Brief Description of the Drawings
One or more embodiments are exemplarily described with reference to the corresponding figures. These exemplary descriptions do not constitute a limitation on the embodiments; elements with the same reference numerals in the figures represent similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a system framework diagram of the smart city interaction system provided by an embodiment of the present application;
FIG. 2 is a block diagram of the interactive advertisement display terminal provided by an embodiment of the present application;
FIG. 3 is a block diagram of the cloud server for interactive advertisement display provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of one embodiment of the interactive advertisement display method provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of one embodiment of the smart city interaction method provided by an embodiment of the present application; and
FIG. 6 is a schematic diagram of the hardware structure of an electronic device for the interactive advertisement display method provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the figures and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit the present invention.
The interactive advertisement display method, system, terminal, and cloud server, and the smart city interaction system provided by the present application are all based on an interactive advertisement display terminal 100. Based on image and audio recognition technology, they mine the user's real needs from the user's behavioral and verbal reactions in a real-time online interactive manner, and push advertisements that meet the user's current needs.
Please refer to FIG. 2, which is a block diagram of the interactive advertisement display terminal.
The interactive advertisement display terminal 100 includes a processor 110, a user recognition module 120, a reaction recognition module 130, a sending module 140, and an obtaining module 150. The user recognition module 120 includes a face recognition module 122. When an image of a viewing user is found, the reaction recognition module 130 collects the viewing user's reaction to the content played on the display screen and recognizes the content of the reaction. Each functional module performs its function under the control of the processor 110.
In a preferred embodiment, the reaction recognition module 130 includes an audio data collection module 132, a video data collection module 136, and a matching module 138, and the reaction content is recognized from audio data and video data.
Alternatively, in one embodiment, the reaction recognition module 130 may include only an image collection module and an image recognition module. In this case the reaction recognition module 130 recognizes the reaction content only from video data. Specifically, from the images obtained by the video data collection module 136, the image recognition module identifies the position of the viewing user's eyes to determine whether the user is paying attention to the delivered advertisement, or interprets what the user is saying based on lip-reading software.
The user recognition module 120 obtains an initial image from in front of the display screen of the advertisement display terminal and recognizes an image of a viewing user in the initial image.
The face recognition module 122 is used to find, from the initial image, users who are watching the advertisement and are willing to interact. The face recognition module 122 obtains elliptical contours from the initial image, stitches a 3D facial model using the color highlights of the regions within each elliptical contour as element points, compares the 3D facial model with a base model to recognize all the faces in the initial image, and defines the clearest face with appropriate facial symmetry as the image of the viewing user who is currently watching.
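The selection rule above ("the clearest face with appropriate facial symmetry") can be sketched as a simple scoring pass over detected face candidates. This is an illustrative sketch, not the patented implementation: the `FaceCandidate` fields and the `min_symmetry` threshold are assumptions introduced here, and real sharpness/symmetry measures would come from the image pipeline.

```python
from dataclasses import dataclass

@dataclass
class FaceCandidate:
    sharpness: float   # 0..1, higher = clearer face image (hypothetical score)
    symmetry: float    # 0..1, 1.0 = perfectly mirrored left/right halves

def pick_viewing_user(candidates, min_symmetry=0.7):
    """Return the index of the clearest face whose left/right symmetry is
    adequate, or None when no face qualifies (no one facing the screen)."""
    qualified = [(i, c) for i, c in enumerate(candidates)
                 if c.symmetry >= min_symmetry]
    if not qualified:
        return None
    # Among adequately symmetric faces, take the clearest one.
    return max(qualified, key=lambda ic: ic[1].sharpness)[0]
```

A side-on or blurred face scores low on symmetry or sharpness and is skipped, which matches the intent of selecting a user who is actually facing and watching the screen.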
In one embodiment of the reaction recognition module 130, when an image of a viewing user is found, the video data collection module 136 collects video data for a set time period and the audio data collection module 132 collects audio data for the set time period. The reaction recognition module 130 recognizes the user's facial movements based on the video data and recognizes the voice content based on the audio data. When the user's facial movements match the voice content, the sending module 140 sends the recognized voice content to the cloud server 300.
The obtaining module 150 receives advertisement data related to the voice content returned by the cloud server 300 according to the voice content, and the display module 180 delivers an advertisement according to the advertisement data.
The matching module 138 matches the facial movements with the voice by comparing the recognized facial movements of the user with the frequency of the audio data.
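The comparison performed by the matching module 138 can be illustrated, under assumptions, as a zero-lag correlation between a mouth-openness signal and an audio energy envelope sampled at the same frame rate: when the mouth moves in step with the sound, the two signals correlate, suggesting the sound belongs to that user. The signal names and the 0.5 threshold are hypothetical; the patent does not specify the comparison formula.

```python
def mouth_audio_match(mouth_openness, audio_energy, threshold=0.5):
    """Decide whether a mouth-movement signal and an audio energy envelope
    plausibly come from the same speaker, via normalized correlation."""
    n = min(len(mouth_openness), len(audio_energy))
    m, a = mouth_openness[:n], audio_energy[:n]
    mean_m = sum(m) / n
    mean_a = sum(a) / n
    cov = sum((x - mean_m) * (y - mean_a) for x, y in zip(m, a))
    var_m = sum((x - mean_m) ** 2 for x in m)
    var_a = sum((y - mean_a) ** 2 for y in a)
    if var_m == 0 or var_a == 0:   # a flat signal gives no evidence of speech
        return False
    corr = cov / (var_m ** 0.5 * var_a ** 0.5)
    return corr >= threshold
```

A mismatch (for example, the audio peaking while the mouth is closed) yields a low or negative correlation, corresponding to the "recognition error" branch in the method flow.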
In another embodiment of the reaction recognition module 130, the reaction recognition module 130 recognizes the reaction content only from video data. From the images obtained by the video data collection module 136, the reaction recognition module 130 uses the image recognition module to identify the viewing user and interpret the viewing user's reaction to the content currently played by the advertisement display terminal. The advertisement display terminal 100 sends the recognized reaction content to the cloud server 300, receives advertisement data related to the reaction content returned by the cloud server 300 according to the reaction content, and then delivers an advertisement based on the advertisement data through the display module.
To push advertisements that meet the user's current needs online and in real time, the interactive advertisement display terminal 100 cooperates with the cloud server 300: the advertisement display terminal 100 analyzes and mines the user's needs based on image and audio recognition technology, and the cloud server 300 searches the cloud for advertisement content highly correlated with the voice content and delivers targeted advertisements accordingly.
Please refer to FIG. 3, which is a block diagram of the cloud server for interactive advertisement display. The cloud server includes a sending module 310, a processing module 320, and a receiving module 330.
The receiving module 330 receives the recognized reaction content, where the reaction content is obtained by the following method: obtaining an initial image and recognizing an image of a viewing user in the initial image; and, when the image of the viewing user is found, collecting the viewing user's reaction to the content played on the display screen and recognizing the content of the reaction.
Alternatively, in another embodiment, when the user's facial movements match the voice content, the receiving module receives the recognized voice content sent from the advertisement display terminal 100, where the user's facial movements and the voice content are obtained by the following method: obtaining an initial image and recognizing an image of a viewing user in the initial image; and, when the image of the viewing user is found, collecting video data and audio data for a set time period, recognizing the user's facial movements based on the video data, and recognizing the voice content based on the audio data.
The processing module 320 determines advertisement data related to the voice content according to the voice content, and the sending module 310 sends the advertisement data.
In one embodiment, the face recognition module 122 and the audio data collection module 132 of the advertisement display terminal 100 are pre-installed face recognition and voice recognition programs used to collect feedback information from viewing users. The advertisement display terminal 100 connects to the cloud server 300 through a dedicated accelerated network and uploads the information to the cloud-computing big-data analysis back end of the cloud server 300. After analysis and processing, the back-end program sends corresponding information to the advertisement display terminal 100, realizing scenario applications such as targeted advertisement delivery, real-time interaction, and person-to-person interaction.
A specific example of the targeted advertisement delivery scenario: the advertisement display terminal 100 recognizes, through the face recognition module, the users in front of its display screen and identifies the users who are watching the screen by analyzing data such as facial and eye angle, distance, and attention. Meanwhile, the video data collection module 136 captures the user's mouth movements, and the audio data collection module 132 captures sound and analyzes whether its source is that user. For voice content belonging to that user, the reaction recognition module 130 uploads the voice content in real time to the back-end database (the cloud server) for analysis and processing. When the reaction recognition module 130 captures specific speech (for example, words such as "that's lame", "that's great", "creative", or "buy a house"), the cloud server activates the targeted advertisement delivery program and appropriately extends the time given to advertisements the user is interested in. For example, upon hearing users discussing "where to buy a house", it can deliver real-estate advertisements; upon hearing that an advertisement is "suspected of gender discrimination", the cloud server switches away from advertisement content the user resents.
The following describes the first targeted advertisement delivery scenario. The specific execution flow is:
the advertisement display terminal 100 obtains an initial image of the users in front of the display screen through the face recognition module;
an image of a viewing user who is watching the advertisement is recognized from the initial image;
video data and audio data of the viewing user are obtained, and it is determined whether the mouth of the user in the initial image is moving;
if the user's facial movements match the voice content, the advertisement display terminal 100 collects the viewing user's audio data through the voice recognition module and recognizes the voice content from the audio data;
the advertisement display terminal 100 sends the voice content to the cloud server 300;
if the cloud server 300 determines that the received voice content matches locally stored specific-speech data (for example, "buy a house"), the cloud server 300 sends a corresponding advertisement adjustment instruction (for example, deliver real-estate advertisements) to the advertisement display terminal 100; and
the intelligent advertising screen delivers advertisements according to the advertisement adjustment instruction.
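The keyword-triggered adjustment in the flow above can be sketched as a lookup table on the cloud-server side that maps recognized trigger phrases to advertisement adjustment instructions. The `AD_TRIGGERS` table and the instruction format are hypothetical stand-ins for the examples in the text ("buy a house" leads to real-estate advertisements; a resented advertisement is switched away from).

```python
# Hypothetical trigger table; real deployments would populate this from the
# cloud's big-data back end rather than hard-code it.
AD_TRIGGERS = {
    "buy a house": ("real_estate", "extend"),
    "gender discrimination": ("current_ad", "switch_away"),
    "I want to eat": ("catering", "activate"),
}

def ad_adjustment(voice_content):
    """Return the advertisement adjustment instruction for recognized voice
    content, or None when no stored trigger phrase matches."""
    for phrase, (category, action) in AD_TRIGGERS.items():
        if phrase in voice_content:
            return {"category": category, "action": action}
    return None
```

The "extend" action corresponds to giving more air time to advertisements the user is interested in, and "switch_away" to replacing content the user resents.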
The following describes the second scenario, real-time interactive targeted advertisement delivery: the advertisement display terminal 100 recognizes, through the face recognition module, all recognizable users in front of the display screen and identifies the viewing users who are watching the screen by analyzing facial and eye angles and distances. Meanwhile, the video data collection module 136 captures the user's mouth movements, and the audio data collection module 132 captures sound and analyzes whether its source is that user. For voice content belonging to that user, the reaction recognition module 130 uploads the voice content in real time to the back-end database (the cloud server) for analysis and processing. When the reaction recognition module 130 captures specific speech (for example, "I need scenic spots" or "restaurants"), the cloud server 300 activates systems such as navigation, tour guidance, and catering to meet the user's needs.
The specific execution flow is:
the advertisement display terminal 100 obtains an initial image of the users in front of the display screen through the face recognition module;
an image of a viewing user who is watching the display screen is recognized from the initial image;
video data and audio data of the viewing user are obtained, and it is determined whether the mouth of the user in the initial image is moving;
if the user's facial movements match the voice content, the viewing user's audio data is collected through the voice recognition module, and the voice content is recognized from the audio data;
the advertisement display terminal 100 sends the voice content to the cloud server 300;
if the cloud server 300 determines that the received voice data matches specific-speech data stored on the server (for example, "I want to eat"), the cloud server 300 sends a corresponding advertisement adjustment instruction (for example, activate the catering system) to the advertisement display terminal 100; and
the advertisement display terminal 100 delivers advertisements according to the advertisement adjustment instruction (for example, activates the catering system).
Please refer to FIG. 4, which is a flowchart of the interactive advertisement display method implemented by obtaining video and audio data through the advertisement display terminal, described from the processing perspective of the advertisement display terminal.
The interactive advertisement display method includes the following steps:
Step 410: obtaining an initial image in front of the display screen, and recognizing an image of a viewing user in the initial image;
Step 420: collecting the viewing user's reaction to the content played on the display screen, recognizing the content of the reaction, and sending the recognized reaction content to the cloud server. In one specific implementation, video data and audio data are collected for a set time period, the user's facial movements are recognized based on the video data, and the voice content is recognized based on the audio data; if no image of a viewing user is found, the method continues to obtain the initial image in front of the display screen;
Step 430: when the user's facial movements match the voice content, sending the recognized voice content to the cloud server; when the user's facial movements do not match the voice content, a recognition error is indicated, and the method returns to continue obtaining the initial image in front of the display screen and recognizing the image of a viewing user in the initial image;
Step 440: receiving advertisement data related to the voice content returned by the cloud server according to the voice content; and
Step 450: delivering an advertisement according to the advertisement data.
The step of recognizing the image of a viewing user in the initial image specifically includes: obtaining elliptical contours from the initial image, stitching a 3D facial model using the color highlights of the regions within each elliptical contour as element points, comparing the 3D facial model with a base model to recognize all user faces in the initial image, and defining the clearest face with appropriate facial symmetry as the image of the viewing user who is currently watching.
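The control flow of steps 410 to 450 can be sketched as a loop with the recognition stages injected as callables, so the branching (no viewer found, mismatch, success) can be exercised without real camera, microphone, or network handling. All names here are illustrative; this is a sketch of the flow, not the patented implementation.

```python
def run_terminal_once(capture_image, find_viewer, collect_reaction,
                      movements_match_voice, cloud_lookup, play_ad):
    """One pass through steps 410-450 of FIG. 4, using injected hooks."""
    while True:
        frame = capture_image()                      # step 410: initial image
        viewer = find_viewer(frame)
        if viewer is None:                           # no viewing user: retry
            continue
        movements, voice = collect_reaction(viewer)  # step 420: AV capture
        if movements_match_voice(movements, voice):  # step 430: match check
            ad_data = cloud_lookup(voice)            # step 440: cloud returns ad
            play_ad(ad_data)                         # step 450: deliver ad
            return ad_data
        # mismatch means mis-recognition: loop back to step 410
```

With stub hooks, the loop skips frames without a viewer and delivers the ad once a matching reaction is recognized.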
Described from the processing perspective of the cloud server, the interactive advertisement display method provided by an embodiment of the present application includes the following steps:
receiving the recognized reaction content, where the reaction content is obtained by the following method: obtaining an initial image and recognizing an image of a viewing user in the initial image; and, when the image of the viewing user is found, collecting the viewing user's reaction to the content played on the display screen and recognizing the content of the reaction. In one specific implementation, when the user's facial movements match the voice content, the recognized voice content is received, where the user's facial movements and the voice content are obtained by the following method: obtaining an initial image and recognizing an image of a viewing user in the initial image; and, when the image of the viewing user is found, collecting video data and audio data for a set time period, recognizing the user's facial movements based on the video data, and recognizing the voice content based on the audio data;
determining advertisement data related to the reaction content according to the reaction content, where, in the specific implementation above, the reaction content is the voice content; and
sending the advertisement data. The cloud server returns the advertisement data to the advertisement display terminal 100.
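The three cloud-server modules of FIG. 3 (receiving module 330, processing module 320, sending module 310) can be sketched as one small class. The in-memory ad index is a hypothetical stand-in for the cloud's big-data back end, and the `outbox` simply records what the sending module would transmit back to the terminal.

```python
class CloudServer:
    """Sketch of the FIG. 3 module structure, under stated assumptions."""

    def __init__(self, ad_index):
        self.ad_index = ad_index   # keyword -> ad data (hypothetical store)
        self.outbox = []           # what the sending module would transmit

    def receive(self, reaction_content):
        """Receiving module 330: accept reaction content from a terminal."""
        ad_data = self.find_ad(reaction_content)
        if ad_data is not None:
            self.send(ad_data)
        return ad_data

    def find_ad(self, reaction_content):
        """Processing module 320: determine ad data related to the content."""
        for keyword, ad in self.ad_index.items():
            if keyword in reaction_content:
                return ad
        return None

    def send(self, ad_data):
        """Sending module 310: return the ad data to the terminal."""
        self.outbox.append(ad_data)
```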
Please refer to FIG. 1, which is a system framework diagram of the smart city interaction system. The smart city interaction system is also based on the image and audio recognition technology of the interactive advertisement display terminal and the management and statistical analysis of the cloud server, and provides a city interaction mode that is smarter in management and closer to public needs.
The smart city interaction system includes at least one city medium 400, a cloud server 300, and several advertisement display terminals 100 networked with the cloud server 300.
When the reaction content recognized by a selected advertisement display terminal 100 is correlated with the requirements of the interaction request, which can also be understood as when the viewing user's facial movements match the voice content, the host 410 of the city medium 400 establishes a video call with the viewing user through the advertisement display terminal 100 and the cloud server 300.
There may be several city media 400 connected to the cloud server 300, the number depending on the carrying capacity of the cloud server 300. The host 410 initiates an interaction request through the city medium 400.
The advertisement display terminal implements user information collection, extraction, and recognition functions. Through image collection, sound collection, image recognition, and voice recognition, combined with matching the recognized movements to the audio frequency, the user who is currently watching is found.
In an embodiment in which both audio data and video data are collected, the user recognition module of the advertisement display terminal is configured to obtain an initial image in front of the display screen and recognize an image of a viewing user in the initial image. The reaction recognition module of the advertisement display terminal is configured to, when the image of the viewing user is found, collect video data and audio data for a set time period, recognize the user's facial movements based on the video data, and recognize the voice content based on the audio data. The video call module of the advertisement display terminal is configured to, when the user's facial movements match the voice content, establish a video call between the viewing user and the city medium through the cloud server. It can be understood that the recognition of the reaction may be completed using video data alone: when the image of the viewing user is found, the viewing user's reaction to the content played on the display screen is collected through the video data, and the content of the reaction is recognized.
The cloud server 300 selects, according to the requirements of the interaction request of the city medium 400 and the correlation between the reaction content and the requirements of the interaction request, one of the several networked advertisement display terminals 100 as the interaction terminal, and a video call between the viewing user and the city medium is established through the interaction terminal and the cloud server. The cloud server 300 may select one of the advertisement display terminals 100 as the interaction terminal in various ways: randomly, at a fixed location, or according to the geographic coordinates of the participating users.
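The three selection strategies just named (random, fixed location, or by the participating users' geographic coordinates) can be sketched as follows. The terminal record layout (`id`, `coords`) is an assumption of this sketch; a squared Euclidean distance stands in for whatever proximity measure a deployment would use.

```python
import random

def choose_interaction_terminal(terminals, strategy="random", target_id=None,
                                user_location=None, seed=None):
    """Pick one advertisement display terminal as the interaction terminal.
    `terminals` is a list of dicts with 'id' and 'coords' (x, y)."""
    if strategy == "random":
        return random.Random(seed).choice(terminals)
    if strategy == "fixed":
        return next(t for t in terminals if t["id"] == target_id)
    if strategy == "nearest":
        ux, uy = user_location
        return min(terminals, key=lambda t: (t["coords"][0] - ux) ** 2
                                            + (t["coords"][1] - uy) ** 2)
    raise ValueError("unknown strategy: " + strategy)
```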
In an embodiment in which both audio data and video data are collected, the matching module of the advertisement display terminal 100, that is, the interaction terminal, matches the facial movements with the voice by comparing the recognized facial movements of the user with the frequency of the audio data.
The smart city interaction system can realize person-to-person interaction and can be applied in many scenarios, such as questionnaires, public-opinion collection, crime fighting, and real-time broadcasting.
An example of the person-to-person interaction scenario: when the advertisement display terminals 100 are playing a live interview with a celebrity, the host 410 chooses audience interaction and lets the cloud server 300 make a random selection. The cloud server 300 performs random selection processing: it randomly selects one advertisement display terminal 100 from the large number of advertisement display terminals 100, and through that terminal randomly selects an audience member who is watching in front of the screen. When an audience member verified by the advertisement display terminal 100 is selected, the city medium 400 displays that audience member's image and activates the video call system through the advertisement display terminal 100, realizing real-time communication between the audience member and the hosted celebrity, which is played simultaneously on all advertisement display terminals 100, achieving real-time interaction in a public environment.
The general execution flow of the person-to-person interaction scenario is:
the cloud server 300 sends a video call request to the selected advertisement display terminal 100, that is, the interaction terminal; and
after receiving the video call request, the interaction terminal obtains video and audio data of the users in front of the display screen through the face recognition module and verification; after confirming correct recognition of the watching user through movement recognition, voice recognition, and frequency matching, it starts the video call system and establishes a video call between the viewing user and the host 410.
Optionally, all advertisement display terminals 100 display the user's image information in full screen and activate the video call system according to the video call request, realizing real-time communication between the user and the celebrity.
The present application also relates to a smart city interaction method, comprising:
initiating, by at least one city medium connected to a cloud server, an interaction request;
selecting, by the cloud server according to the requirements of the interaction request, one of several networked advertisement display terminals as an interaction terminal;
completing, by the interaction terminal, the following steps:
obtaining an initial image in front of the display screen, and recognizing an image of a viewing user in the initial image;
when the image of the viewing user is found, collecting the viewing user's reaction to the content played on the display screen, recognizing the content of the reaction, and sending the recognized reaction content to the cloud server; and
establishing, by the cloud server according to the correlation between the reaction content and the requirements of the interaction request, a video call between the viewing user and the city medium through the interaction terminal and the cloud server.
Please refer to FIG. 5, which is a schematic flowchart of the smart city interaction method implemented by obtaining video and audio data through the advertisement display terminal.
An embodiment of the present application also relates to a smart city interaction method, the method comprising:
Step 510: initiating, by at least one city medium connected to a cloud server, an interaction request;
Step 520: selecting, by the cloud server according to the requirements of the interaction request, one of several networked advertisement display terminals as an interaction terminal;
the interaction terminal completes the following steps:
Step 530: obtaining an initial image in front of the display screen, and recognizing an image of a viewing user in the initial image;
Step 540: when the image of the viewing user is found, collecting video data and audio data for a set time period, recognizing the user's facial movements based on the video data, and recognizing the voice content based on the audio data; when no image of a viewing user is found, continuing to obtain the initial image in front of the display screen; and
Step 550: when the user's facial movements match the voice content, establishing a video call between the viewing user and the city medium through the interaction terminal and the cloud server; when the user's facial movements do not match the voice content, a recognition error is indicated, and the method returns to continue obtaining the initial image in front of the display screen and recognizing the image of a viewing user in the initial image. If recognition and verification fail several times, a no-user-recognized message is returned to the cloud server, and the cloud server may again randomly select another advertisement display terminal 100 for recognition and verification until a user available for a video connection is found.
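The step-550 fallback (retry recognition a few times, then let the cloud server pick another terminal until a connectable user is found) can be sketched as follows. The `try_identify` hook stands in for the terminal's whole recognition-and-verification pass, and the retry limit is an assumption of this sketch.

```python
import random

def establish_video_call(terminals, try_identify,
                         max_attempts_per_terminal=3, rng=None):
    """Keep selecting terminals at random until one identifies a viewer
    available for a video connection; return (terminal, user) or None."""
    rng = rng or random.Random()
    remaining = list(terminals)
    while remaining:
        terminal = rng.choice(remaining)
        for _ in range(max_attempts_per_terminal):
            user = try_identify(terminal)   # recognition + verification pass
            if user is not None:
                return terminal, user       # start the video call here
        remaining.remove(terminal)          # "no user" reported; reselect
    return None                             # no terminal found a viewer
```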
The interactive advertisement display method, terminal, and smart city interaction system provided by the embodiments of the present application push adaptive advertisements for users' real needs in a real-time online interactive manner, which is smarter and more humane. The intelligent advertisement display terminal of the present application, which realizes human-machine interaction, precisely analyzes the needs of the on-site public based on image and audio recognition technology and pushes advertisements that truly meet users' needs. The smart city interaction system of the present application, based on the terminal's image and audio recognition technology and the management and statistical analysis of the cloud server, provides a city interaction mode that is smarter in management and closer to public needs.
FIG. 6 is a schematic diagram of the hardware structure of an electronic device 600 for the interactive advertisement display method provided by an embodiment of the present application. As shown in FIG. 6, the electronic device 600 includes:
one or more processors 610, a memory 620, a human-machine interaction unit 630, a display unit 640, and a communication component 650; one processor 610 is taken as an example in FIG. 6. The human-machine interaction unit 630 includes an audio data collector and a video data collector. The memory 620 stores instructions executable by the at least one processor 610, and when executed by the at least one processor, the instructions invoke data from the audio data collector and the video data collector and establish a connection with the cloud server through the communication component 650, so that the at least one processor can perform the interactive advertisement display method.
The processor 610, the memory 620, the display unit 640, and the human-machine interaction unit 630 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
As a non-transitory computer-readable storage medium, the memory 620 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the interactive advertisement display method in the embodiments of the present application (for example, the user recognition module 120, the reaction recognition module 130, the sending module 140, and the obtaining module 150 shown in FIG. 2). By running the non-transitory software programs, instructions, and modules stored in the memory 620, the processor 610 executes various functional applications and data processing of the server, that is, implements the interactive advertisement display method in the above method embodiments.
The memory 620 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function, and the data storage area may store data created according to the use of the interactive advertisement display electronic device, and the like. In addition, the memory 620 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 620 may optionally include memory located remotely relative to the processor 610, and such remote memory may be connected to the interactive advertisement display electronic device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 620 and, after the user completes the setting interaction of the private content library through the human-machine interaction unit 630, when executed by the one or more processors 610, perform the interactive advertisement display method in any of the above method embodiments, for example, performing method steps 410 to 450 in FIG. 4 described above and implementing the functions of the user recognition module 120, the reaction recognition module 130, the sending module 140, the obtaining module 150, and the like in FIG. 2.
The above product can perform the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects for performing the method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices: these devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication. Such terminals include smartphones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: these devices can display and play multimedia content. Such devices include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server consists of a processor, a hard disk, memory, a system bus, and so on. A server is similar in architecture to a general-purpose computer, but because it needs to provide highly reliable services, it has higher requirements in terms of processing power, stability, reliability, security, scalability, and manageability.
(5) Other electronic devices with data interaction functions.
An embodiment of the present application provides a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as one processor 610 in FIG. 6, so that the one or more processors can perform the interactive advertisement display method in any of the above method embodiments, for example, performing method steps 410 to 450 in FIG. 4 described above and implementing the functions of the user recognition module 120, the reaction recognition module 130, the sending module 140, the obtaining module 150, and the like in FIG. 2.
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
Through the description of the above embodiments, a person of ordinary skill in the art can clearly understand that each embodiment can be implemented by software plus a general-purpose hardware platform, and certainly also by hardware. A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Under the concept of the present application, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the present application as described above, which, for brevity, are not provided in detail. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (17)

  1. An interactive advertisement display method, characterized by comprising the following steps:
    collecting a viewing user's reaction to content played on a display screen;
    recognizing the content of the viewing user's reaction;
    finding advertisement data related to the reaction content according to the reaction content; and
    delivering an advertisement according to the advertisement data.
  2. The method according to claim 1, characterized in that collecting the viewing user's reaction to the content played on the display screen comprises:
    collecting video data for a set time period, and recognizing the user's facial movements based on the video data; and/or
    collecting audio data for the set time period, and recognizing voice content based on the audio data.
  3. The method according to claim 2, characterized in that, when the user's facial movements match the voice content, the advertisement data related to the reaction content is found according to the reaction content, wherein matching the user's facial movements with the voice content comprises: matching the facial movements with the voice by comparing the recognized facial movements of the user with the frequency of the audio data.
  4. The method according to any one of claims 1-3, characterized in that, before collecting the viewing user's reaction to the content played on the display screen, the method further comprises the steps of: obtaining an initial image in front of the display screen, and recognizing an image of a viewing user in the initial image; and, when the image of the viewing user is found, collecting the viewing user's reaction to the content played on the display screen;
    wherein the step of recognizing the image of a viewing user in the initial image comprises:
    obtaining elliptical contours from the initial image, stitching a 3D facial model using the color highlights of the regions within each elliptical contour as element points, comparing the 3D facial model with a base model to recognize all user faces in the initial image, and defining the clearest face with appropriate facial symmetry as the image of the viewing user who is currently watching.
  5. An interactive advertisement display terminal, characterized by comprising:
    a reaction recognition module, configured to collect a viewing user's reaction to content played on a display screen and recognize the content of the reaction;
    an obtaining module, configured to find advertisement data related to the reaction content according to the reaction content; and
    a display module, configured to deliver an advertisement according to the advertisement data.
  6. The interactive advertisement display terminal according to claim 5, characterized in that the reaction recognition module is configured to collect video data for a set time period and recognize the user's facial movements based on the video data, and/or to collect audio data for the set time period and recognize voice content based on the audio data.
  7. The interactive advertisement display terminal according to claim 6, characterized by further comprising a matching module, configured to match the facial movements with the voice by comparing the recognized facial movements of the user with the frequency of the audio data, wherein, when the user's facial movements match the voice content, advertisement data related to the voice content is obtained according to the voice content.
  8. The interactive advertisement display terminal according to any one of claims 5-7, characterized by further comprising a user recognition module, configured to obtain an initial image in front of the display screen and recognize an image of a viewing user in the initial image, the user recognition module comprising:
    a face recognition module, configured to obtain elliptical contours from the initial image, stitch a 3D facial model using the color highlights of the regions within each elliptical contour as element points, compare the 3D facial model with a base model to recognize all faces in the initial image, and define the clearest face with appropriate facial symmetry as the image of the viewing user who is currently watching.
  9. A smart city interaction method, characterized by comprising:
    initiating, by at least one city medium connected to a cloud server, an interaction request;
    selecting, by the cloud server according to the requirements of the interaction request, one of several networked advertisement display terminals as an interaction terminal;
    completing, by the interaction terminal, the following steps:
    collecting a viewing user's reaction to content played on a display screen, recognizing the content of the reaction, and sending the recognized reaction content to the cloud server; and
    establishing, by the cloud server according to the correlation between the reaction content and the requirements of the interaction request, a video call between the viewing user and the city medium through the interaction terminal and the cloud server.
  10. The method according to claim 9, characterized in that collecting the viewing user's reaction to the content played on the display screen comprises:
    collecting video data for a set time period, and recognizing the user's facial movements based on the video data; and/or
    collecting audio data for the set time period, and recognizing voice content based on the audio data.
  11. The method according to claim 9 or 10, characterized in that, before collecting the viewing user's reaction to the content played on the display screen, the method further comprises the steps of: obtaining an initial image in front of the display screen, recognizing an image of a viewing user in the initial image, and, when the image of the viewing user is found, collecting the viewing user's reaction to the content played on the display screen; wherein the step of recognizing the image of a viewing user in the initial image comprises:
    obtaining elliptical contours from the initial image, stitching a 3D facial model using the color highlights of the regions within each elliptical contour as element points, comparing the 3D facial model with a base model to recognize all user faces in the initial image, and defining the clearest face with appropriate facial symmetry as the image of the viewing user who is currently watching.
  12. A smart city interaction system, characterized by comprising:
    at least one city medium, configured to initiate an interaction request;
    a cloud server, to which the city medium is connected; and
    several advertisement display terminals networked with the cloud server, each advertisement display terminal comprising a reaction recognition module, configured to collect a viewing user's reaction to content played on a display screen and recognize the content of the reaction, and a video call module, configured to establish a video call between the viewing user and the city medium through the cloud server;
    wherein the cloud server selects, according to the requirements of the interaction request and the correlation between the reaction content and the requirements of the interaction request, one of the several networked advertisement display terminals as an interaction terminal, and a video call between the viewing user and the city medium is established through the interaction terminal and the cloud server.
  13. The smart city interaction system according to claim 12, characterized in that the reaction recognition module is configured to collect video data for a set time period and recognize the user's facial movements based on the video data, and/or to collect audio data for the set time period and recognize voice content based on the audio data.
  14. The smart city interaction system according to claim 12 or 13, characterized by further comprising a user recognition module, configured to obtain an initial image in front of the display screen and recognize an image of a viewing user in the initial image, the user recognition module comprising:
    a face recognition module, configured to obtain elliptical contours from the initial image, stitch a 3D facial model using the color highlights of the regions within each elliptical contour as element points, compare the 3D facial model with a base model to recognize all faces in the initial image, and define the clearest face with appropriate facial symmetry as the image of the viewing user who is currently watching;
    wherein, when the image of the viewing user is found, the reaction recognition module collects the viewing user's reaction to the content played on the display screen.
  15. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, a communication component, an audio data collector, and a video data collector; wherein
    the memory stores instructions executable by the at least one processor, and when executed by the at least one processor, the instructions invoke data from the audio data collector and the video data collector and establish a connection with a cloud server through the communication component, so that the at least one processor can perform the method according to any one of claims 1-4.
  16. A non-transitory computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the method according to any one of claims 1-4.
  17. A computer program product, wherein the computer program product comprises a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1-4.
PCT/CN2016/108239 2016-12-01 2016-12-01 一种交互式广告展示方法、终端及智慧城市交互系统 WO2018098780A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/108239 WO2018098780A1 (zh) 2016-12-01 2016-12-01 一种交互式广告展示方法、终端及智慧城市交互系统
CN201680003359.1A CN107278374B (zh) 2016-12-01 2016-12-01 一种交互式广告展示方法、终端及智慧城市交互系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108239 WO2018098780A1 (zh) 2016-12-01 2016-12-01 一种交互式广告展示方法、终端及智慧城市交互系统

Publications (1)

Publication Number Publication Date
WO2018098780A1 true WO2018098780A1 (zh) 2018-06-07

Family

ID=60052578

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108239 WO2018098780A1 (zh) 2016-12-01 2016-12-01 一种交互式广告展示方法、终端及智慧城市交互系统

Country Status (2)

Country Link
CN (1) CN107278374B (zh)
WO (1) WO2018098780A1 (zh)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110187813A (zh) * 2019-05-09 2019-08-30 深圳报业集团控股公司 一种触摸感应交互系统及交互方法
CN110321002A (zh) * 2019-05-09 2019-10-11 深圳报业集团控股公司 一种场景交互系统及交互方法
CN110348888A (zh) * 2019-06-21 2019-10-18 深圳市元征科技股份有限公司 一种多媒体广告投放方法、装置以及设备
CN111104867A (zh) * 2019-11-25 2020-05-05 北京迈格威科技有限公司 基于部件分割的识别模型训练、车辆重识别方法及装置
CN111553261A (zh) * 2020-04-26 2020-08-18 深圳市易平方网络科技有限公司 一种基于人脸识别的广告效果监测方法、系统及智能终端
CN112434741A (zh) * 2020-11-25 2021-03-02 杭州盛世传奇标识系统有限公司 一种互动介绍标识的使用方法、系统、装置和存储介质
CN112637363A (zh) * 2021-01-05 2021-04-09 上海臻琴文化传播有限公司 一种信息流推送处理方法、系统、装置和存储介质
CN112995773A (zh) * 2019-12-13 2021-06-18 阿里巴巴集团控股有限公司 一种互动视频的交互提示方法、装置、终端及存储介质
CN113099030A (zh) * 2021-03-24 2021-07-09 深圳市联谛信息无障碍有限责任公司 基于超声波的灯光互动方法、移动终端和声音播放设备
CN113159824A (zh) * 2021-03-03 2021-07-23 广州朗国电子科技有限公司 基于人脸识别的广告传媒控制系统
CN113240466A (zh) * 2021-05-12 2021-08-10 武汉轻派壳子数码有限公司 基于大数据深度分析的移动传媒视频数据处理方法、设备及存储介质
CN113395596A (zh) * 2020-03-11 2021-09-14 上海佰贝科技发展股份有限公司 一种基于智能电视的互联网电视互动方法及系统
CN114004645A (zh) * 2021-10-29 2022-02-01 浙江省民营经济发展中心(浙江省广告监测中心) 融媒体广告智慧监测平台和电子设备
CN114666316A (zh) * 2022-03-24 2022-06-24 阿里云计算有限公司 信息处理方法、装置及存储介质
CN116777524A (zh) * 2023-07-18 2023-09-19 北京吉欣科技有限公司 基于人工智能的互动广告投放方法及相关装置

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107799039B (zh) * 2017-11-10 2024-05-07 东莞市极制电子科技有限公司 一种交互式智能灯箱及其控制方法
CN109840009A (zh) * 2017-11-28 2019-06-04 浙江思考者科技有限公司 一种智能真人广告屏交互系统及实现方法
CN107835460A (zh) * 2017-12-18 2018-03-23 维沃移动通信有限公司 一种控制终端播放状态的方法及装置
CN108182602A (zh) * 2018-01-03 2018-06-19 陈顺宝 一种户外多媒体信息移动展示系统
CN110248252B (zh) * 2018-03-08 2023-06-20 上海博泰悦臻网络技术服务有限公司 视频中兴趣点互动方法、系统、电子终端及存储介质
CN108764969A (zh) * 2018-05-02 2018-11-06 天遐科汇(深圳)科技有限公司 一种自助式信息发布系统及方法
WO2019210463A1 (zh) * 2018-05-02 2019-11-07 天遐科汇(深圳)科技有限公司 一种自助式信息发布系统及方法
CN109191171A (zh) * 2018-07-24 2019-01-11 上海常仁信息科技有限公司 一种基于健康机器人的广告系统
CN108985862A (zh) * 2018-08-24 2018-12-11 深圳艺达文化传媒有限公司 电梯广告的查询方法及相关产品
CN109711954A (zh) * 2019-01-29 2019-05-03 福建任我行科技发展有限公司 基于云端大数据平台的可精准投送的鞋类动态展示系统
CN110322268A (zh) * 2019-05-09 2019-10-11 深圳报业集团控股公司 一种基于回波信号的场景交互系统及交互方法
CN110166793A (zh) * 2019-05-09 2019-08-23 东莞康佳电子有限公司 一种基于高清智能电视的内容的推送方法及其系统
CN110880125A (zh) * 2019-10-11 2020-03-13 京东数字科技控股有限公司 虚拟资产的核销方法、装置、服务器及存储介质
CN111241956A (zh) * 2020-01-03 2020-06-05 重庆特斯联智慧科技股份有限公司 一种三维成像感知的智能建筑广告幕墙系统
CN111369296A (zh) * 2020-03-09 2020-07-03 北京市威富安防科技有限公司 广告展示方法、装置、计算机设备和存储介质
CN112040300A (zh) * 2020-09-11 2020-12-04 浙江金澜文化传媒股份有限公司 一种交互式广告展示方法、终端及智慧城市交互系统
CN112288476A (zh) * 2020-10-28 2021-01-29 衡阳淘屏新媒体有限公司 动态投放广告的系统
CN112559438A (zh) * 2020-12-02 2021-03-26 京东数字科技控股股份有限公司 一种广告机
CN113706205A (zh) * 2021-08-31 2021-11-26 湖南三圆惟度品牌整合有限公司 一种基于人像识别的广告投放方法及应用该方法的系统
CN114844887B (zh) * 2022-03-30 2024-04-19 广州市华懋科技发展有限公司 新型互联网独立平台系统及其数据交互方法
CN114863847B (zh) * 2022-05-07 2023-09-08 南京欣威视通信息科技股份有限公司 基于鸿蒙系统开发的人机智能互动式户外广告机

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080059994A1 (en) * 2006-06-02 2008-03-06 Thornton Jay E Method for Measuring and Selecting Advertisements Based Preferences
CN102129644A (zh) * 2011-03-08 2011-07-20 北京理工大学 一种具有受众特性感知与统计功能的智能广告系统
CN102799265A (zh) * 2012-06-26 2012-11-28 宇龙计算机通信科技(深圳)有限公司 一种播放广告的方法、智能广告终端、服务器及系统
CN102881239A (zh) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 基于图像识别的广告投播系统及方法
CN106162221A (zh) * 2015-03-23 2016-11-23 阿里巴巴集团控股有限公司 直播视频的合成方法、装置及系统
CN106296289A (zh) * 2016-08-10 2017-01-04 中控智慧科技股份有限公司 一种控制广告投放的方法以及广告投放装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105812717A (zh) * 2016-04-21 2016-07-27 邦彦技术股份有限公司 多媒体会议控制方法及服务器

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080059994A1 (en) * 2006-06-02 2008-03-06 Thornton Jay E Method for Measuring and Selecting Advertisements Based Preferences
CN102129644A (zh) * 2011-03-08 2011-07-20 北京理工大学 一种具有受众特性感知与统计功能的智能广告系统
CN102881239A (zh) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 基于图像识别的广告投播系统及方法
CN102799265A (zh) * 2012-06-26 2012-11-28 宇龙计算机通信科技(深圳)有限公司 一种播放广告的方法、智能广告终端、服务器及系统
CN106162221A (zh) * 2015-03-23 2016-11-23 阿里巴巴集团控股有限公司 直播视频的合成方法、装置及系统
CN106296289A (zh) * 2016-08-10 2017-01-04 中控智慧科技股份有限公司 一种控制广告投放的方法以及广告投放装置

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321002A (zh) * 2019-05-09 2019-10-11 深圳报业集团控股公司 一种场景交互系统及交互方法
CN110187813A (zh) * 2019-05-09 2019-08-30 深圳报业集团控股公司 一种触摸感应交互系统及交互方法
CN110348888B (zh) * 2019-06-21 2024-02-06 深圳市元征科技股份有限公司 一种多媒体广告投放方法、装置以及设备
CN110348888A (zh) * 2019-06-21 2019-10-18 深圳市元征科技股份有限公司 一种多媒体广告投放方法、装置以及设备
CN111104867A (zh) * 2019-11-25 2020-05-05 北京迈格威科技有限公司 基于部件分割的识别模型训练、车辆重识别方法及装置
CN111104867B (zh) * 2019-11-25 2023-08-25 北京迈格威科技有限公司 基于部件分割的识别模型训练、车辆重识别方法及装置
CN112995773A (zh) * 2019-12-13 2021-06-18 阿里巴巴集团控股有限公司 一种互动视频的交互提示方法、装置、终端及存储介质
CN113395596A (zh) * 2020-03-11 2021-09-14 上海佰贝科技发展股份有限公司 一种基于智能电视的互联网电视互动方法及系统
CN111553261A (zh) * 2020-04-26 2020-08-18 深圳市易平方网络科技有限公司 一种基于人脸识别的广告效果监测方法、系统及智能终端
CN112434741A (zh) * 2020-11-25 2021-03-02 杭州盛世传奇标识系统有限公司 一种互动介绍标识的使用方法、系统、装置和存储介质
CN112637363A (zh) * 2021-01-05 2021-04-09 上海臻琴文化传播有限公司 一种信息流推送处理方法、系统、装置和存储介质
CN113159824A (zh) * 2021-03-03 2021-07-23 广州朗国电子科技有限公司 基于人脸识别的广告传媒控制系统
CN113159824B (zh) * 2021-03-03 2023-09-01 广州朗国电子科技股份有限公司 基于人脸识别的广告传媒控制系统
CN113099030A (zh) * 2021-03-24 2021-07-09 深圳市联谛信息无障碍有限责任公司 基于超声波的灯光互动方法、移动终端和声音播放设备
CN113240466A (zh) * 2021-05-12 2021-08-10 武汉轻派壳子数码有限公司 基于大数据深度分析的移动传媒视频数据处理方法、设备及存储介质
CN114004645A (zh) * 2021-10-29 2022-02-01 浙江省民营经济发展中心(浙江省广告监测中心) 融媒体广告智慧监测平台和电子设备
CN114666316A (zh) * 2022-03-24 2022-06-24 阿里云计算有限公司 信息处理方法、装置及存储介质
CN116777524A (zh) * 2023-07-18 2023-09-19 北京吉欣科技有限公司 基于人工智能的互动广告投放方法及相关装置

Also Published As

Publication number Publication date
CN107278374A (zh) 2017-10-20
CN107278374B (zh) 2020-01-03

Similar Documents

Publication Publication Date Title
WO2018098780A1 (zh) 一种交互式广告展示方法、终端及智慧城市交互系统
US9530251B2 (en) Intelligent method of determining trigger items in augmented reality environments
CN105450778B (zh) 信息推送系统
EP2673737B1 (en) A system for the tagging and augmentation of geographically-specific locations using a visual data stream
US8447329B2 (en) Method for spatially-accurate location of a device using audio-visual information
US9183546B2 (en) Methods and systems for a reminder servicer using visual recognition
US20150012840A1 (en) Identification and Sharing of Selections within Streaming Content
US20210385506A1 (en) Method and electronic device for assisting live streaming
US20120203799A1 (en) System to augment a visual data stream with user-specific content
CN111835531B (zh) 会话处理方法、装置、计算机设备及存储介质
WO2017197826A1 (zh) 图像特征关系的匹配方法、装置和系统
WO2015043547A1 (en) A method, device and system for message response cross-reference to related applications
KR20160044902A (ko) 방송 콘텐트와 관련한 부가 정보 제공 방법 및 이를 구현하는 전자 장치
US11568615B2 (en) Collaborative on-demand experiences
CN111312240A (zh) 数据控制方法、装置、电子设备及存储介质
CN112101304B (zh) 数据处理方法、装置、存储介质及设备
US20170171594A1 (en) Method and electronic apparatus of implementing voice interaction in live video broadcast
US20170278130A1 (en) Method and Electronic Device for Matching Advertisement Data
CN114302160B (zh) 信息显示方法、装置、计算机设备及介质
US20200250708A1 (en) Method and system for providing recommended digital content item to electronic device
CN114666643A (zh) 一种信息显示方法、装置、电子设备及存储介质
CN115361588B (zh) 一种对象显示方法、装置、电子设备及存储介质
KR20150129955A (ko) 광고 제공 시스템, 그 시스템에서의 광고 제공을 위한 장치 및 방법
US20200252480A1 (en) Method and system for providing a recommended digital content item
CN113507620B (zh) 直播数据处理方法、装置、设备以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16922725

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: 1205A, 15.10.2019

122 Ep: pct application non-entry in european phase

Ref document number: 16922725

Country of ref document: EP

Kind code of ref document: A1