CN105956170B - Real-time scene information embedding method, Scene realization system and implementation method - Google Patents
- Publication number
- CN105956170B CN105956170B CN201610341769.9A CN201610341769A CN105956170B CN 105956170 B CN105956170 B CN 105956170B CN 201610341769 A CN201610341769 A CN 201610341769A CN 105956170 B CN105956170 B CN 105956170B
- Authority
- CN
- China
- Prior art keywords
- audio
- scene
- information
- video file
- smart device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/686—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
Abstract
A real-time scene information embedding method, a scene realization system, and a scene realization method are disclosed. The real-time scene information embedding method includes: providing an audio/video file; determining, from the content of the audio/video file, the scene information to be realized; labeling the scene information to form label information; formatting the label information into a formatted tag, the formatted tag being parseable and convertible into control instructions for smart devices; and associating the formatted tag with the audio/video file. The method can be applied to any audio/video file to embed real-time scene information into it, providing the basic information for scene realization during playback.
Description
Technical field
The present invention relates to the field of audio technology, and more particularly to a real-time scene information embedding method.
Background art
Intelligent scene realization typically parses an audio or video file to obtain scene information and then adjusts the environment through smart devices, thereby simulating the scene in the audio or video file and giving the user an immersive experience.
Existing audio/video files generally do not provide scene information related to their content. During scene realization, the scene information therefore has to be customized manually according to the audio/video content, and the operating states of the smart devices have to be set by hand. Different audio/video content requires repeated manual configuration, so reusability is poor and the cost is high.
How to provide real-time scene information for audio/video files is thus a problem to be solved in realizing intelligent scenes.
Summary of the invention
The technical problem solved by the present invention is to provide a real-time scene information embedding method, a scene realization system, and a scene realization method.
To solve the above problem, the present invention provides a real-time scene information embedding method, comprising: providing an audio/video file; determining, from the content of the audio/video file, the scene information to be realized; labeling the scene information to form label information; formatting the label information into a formatted tag, the formatted tag being parseable and convertible into control instructions for smart devices; and associating the formatted tag with the audio/video file.
Optionally, associating the formatted tag with the audio/video file comprises: inserting the formatted tag into the audio/video file to form a new audio/video file.
Optionally, associating the formatted tag with the audio/video file comprises: integrating the formatted tag with time information to form an external (sidecar) file for the audio/video.
Optionally, the label information includes typical-environment scene information and specific environment description information.
Optionally, the typical-environment scene information includes grassland, forest, desert, glacier, snowfield, beach, or sea; the specific environment description information includes brightness, ambient color temperature, temperature, ambient humidity, wind, rain, vibration, gases of different odors, or bubbles, together with a start time and a duration.
Optionally, the label information further includes: time points at which the scene information in the audio/video changes sharply, and the trigger events corresponding to those time points, each trigger event being an operating event of a smart device; and a set of environment configuration information with the respective durations.
The present invention also provides a scene realization system, comprising: a computing unit connected to smart devices, configured to parse the formatted tag associated with an audio/video file into which real-time scene information has been embedded, form control instructions from the parsed tag content, and send the control instructions to the smart devices; and the smart devices, configured to operate according to the control instructions sent by the computing unit.
Optionally, the computing unit is further configured to obtain the functions and parameter information of the smart devices, and to form the control instructions from the parameter information of the smart devices and the formatted tag content.
The present invention also provides a scene realization method, comprising: providing an audio/video file into which real-time scene information has been embedded and which is associated with a formatted tag corresponding to the scene information; parsing the formatted tag associated with the audio/video file; forming control instructions from the parsed tag content; sending the control instructions to smart devices; and operating the smart devices according to the control instructions.
Optionally, the method further comprises: obtaining the parameter information of the smart devices, and forming the control instructions from the parameter information of the smart devices and the formatted tag content.
By processing the scene information in the content of an audio/video file, the present invention forms a parseable formatted tag and associates it with the audio/video file, thereby embedding real-time scene information into the audio/video file and providing the basic environment and time information for scene realization that improves the audiovisual experience.
The scene realization system of the invention can parse the formatted tag associated with an audio/video file and form control instructions for the smart devices, so that scenes are realized without manually pre-configuring the smart devices; reusability is high.
Brief description of the drawings
Fig. 1 is a flow diagram of the real-time scene information embedding method according to an embodiment of the invention;
Fig. 2 is a structural diagram of the scene realization system according to an embodiment of the invention;
Fig. 3 is a flow diagram of the scene realization method according to an embodiment of the invention.
Specific embodiment
Specific embodiments of the real-time scene information embedding method, the scene realization system, and the scene realization method provided by the invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, a flow diagram of the real-time scene information embedding method according to an embodiment of the invention, comprising steps S101 to S105.
Step S101: providing an audio/video file.
The audio/video file can be an audio file or a video file. An audio file may be music, an audiobook recording, a movie soundtrack, etc.; a video file may be a movie, a documentary, a TV series, a home video, etc.
Step S102: determining, from the content of the audio/video file, the scene information to be realized.
The scene information includes environments and their descriptions: for example, natural environments such as grassland, forest, desert, glacier, snowfield, beach, or sea, together with related context information such as temperature, brightness, ambient color temperature, ambient humidity, wind, rain, vibration, and gases of different odors.
The scene information can also include information about environmental changes. Some changes follow a slow trend, such as rising temperature, strengthening rain, or gradually dimming light; others are abrupt, such as falling into water, a sudden gust of wind, or sudden rain.
Step S103: labeling the scene information to form label information.
The scene information in the audio/video content is described with unified labels. Specifically, the labels include typical-environment scene information and specific environment description information.
The typical-environment scene information includes grassland, forest, desert, glacier, snowfield, beach, sea, etc. A typical-environment scene is one for which there is a widely shared understanding: grassland is widely understood as green with a breeze; a desert is widely understood as yellow, sweltering, and dry.
The specific environment description information includes brightness, ambient color temperature, temperature, ambient humidity, wind, rain, vibration, gases of different odors, or bubbles, together with a start time and a duration. For example: vibration starting at time t1 and lasting one minute; 25 °C starting at time t2 and lasting ten minutes.
In a specific embodiment of the invention, the label information further includes time points at which the scene information in the audio/video changes sharply, and the trigger events corresponding to those time points, each trigger event being an operating event of a smart device. For example, if the environment changes abruptly at time t3 to falling into ice water, the label information for time t3 causes cold air to be blown; if the video content switches at time t4 to a scorching desert under a blazing sun, the label information for time t4 causes a heat lamp to be lit.
In a specific embodiment of the invention, the label information further includes a set of environment configuration information and the respective durations, indicating the trend of the environment within a period: for example, adjusting the ambient temperature to 30 °C within ten minutes, or adjusting the ambient humidity to 80% within five minutes.
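The label information described above can be sketched as a small data structure. This is an illustrative sketch only: the patent defines no concrete schema, and all field names (`start`, `scene_type`, `trigger_event`, etc.) are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneLabel:
    """One piece of label information for a point or span in the audio/video.

    Field names are illustrative; the patent does not define a concrete schema.
    """
    start: float                      # start time in seconds
    duration: Optional[float] = None  # None for an instantaneous trigger event
    scene_type: Optional[str] = None  # typical-environment scene, e.g. "grassland"
    # specific environment description information
    temperature_c: Optional[float] = None
    humidity_pct: Optional[float] = None
    brightness: Optional[float] = None
    trigger_event: Optional[str] = None  # device event at a sharp scene change

# Examples from the text: vibration starting at t1 lasting one minute,
# 25 °C starting at t2 for ten minutes, and a trigger event at t3
# (falling into ice water -> blow cold air).
labels = [
    SceneLabel(start=65.0, duration=60.0, trigger_event="vibration_on"),
    SceneLabel(start=120.0, duration=600.0, temperature_c=25.0),
    SceneLabel(start=300.0, trigger_event="cold_air_on"),
]
```

Both slow-trend changes (a span with a duration) and abrupt trigger events (a point with no duration) fit the same structure.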
Step S104: formatting the label information into a formatted tag; the formatted tag can be parsed and converted into control instructions for smart devices.
Formatting the label information means representing it with codes, the same code always representing the same label information. The codes can be parsed by a computer and then converted into control instructions for the smart devices.
The same formatting method can be used for the label information of different audio/video files, so that after real-time scene information has been embedded into different audio/video files, the computing unit can parse their formatted tags in the same way.
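A minimal sketch of this formatting step follows: each label maps to a fixed code, so the same code always represents the same label information and any computing unit can decode it the same way. The code table and the `CODE@start+duration` syntax are invented for illustration; the patent specifies no concrete encoding.

```python
# Illustrative code table: same code <-> same label information.
CODE_TABLE = {
    "grassland": "S01", "desert": "S02", "sea": "S03",
    "vibration_on": "E01", "cold_air_on": "E02", "heat_lamp_on": "E03",
}
REVERSE_TABLE = {v: k for k, v in CODE_TABLE.items()}

def format_tag(event: str, start: float, duration=None) -> str:
    """Encode one label as 'CODE@start[+duration]' (illustrative syntax)."""
    tag = f"{CODE_TABLE[event]}@{start:.1f}"
    if duration is not None:
        tag += f"+{duration:.1f}"
    return tag

def parse_tag(tag: str) -> dict:
    """Decode a formatted tag back into its label content."""
    code, _, timing = tag.partition("@")
    start, _, duration = timing.partition("+")
    return {
        "event": REVERSE_TABLE[code],
        "start": float(start),
        "duration": float(duration) if duration else None,
    }

tag = format_tag("vibration_on", 65.0, 60.0)   # -> "E01@65.0+60.0"
```

Because encoding and decoding share one table, tags produced for any audio/video file can be parsed uniformly, which is the reusability property the text emphasizes.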
Step S105: associating the formatted tag with the audio/video file.
In one specific embodiment of the invention, associating the formatted tag with the audio/video file comprises inserting the formatted tag into the audio/video file to form a new audio/video file, the formatted tag being inserted at the position of the corresponding scene in the audio/video file.
In another specific embodiment of the invention, associating the formatted tag with the audio/video file comprises integrating the formatted tag with time information to form an external (sidecar) file, in which the formatted tags correspond one-to-one with the order in which the scenes appear in the audio/video content.
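The external-file variant can be sketched as a sidecar written next to the media file: formatted tags paired with time information, kept in the order the scenes appear. The `.scene` extension and the JSON layout are assumptions for illustration, not anything the patent prescribes.

```python
import json
import os
import tempfile

def write_sidecar(media_path: str, tags: list) -> str:
    """Write tags (with time information) to a sidecar file next to the media."""
    sidecar_path = media_path + ".scene"          # hypothetical extension
    # keep tags in the order the scenes appear in the content
    ordered = sorted(tags, key=lambda t: t["start"])
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump({"media": os.path.basename(media_path), "tags": ordered}, f)
    return sidecar_path

def read_sidecar(sidecar_path: str) -> list:
    """Load the tag list back for decoding alongside the media file."""
    with open(sidecar_path, encoding="utf-8") as f:
        return json.load(f)["tags"]

media = os.path.join(tempfile.mkdtemp(), "movie.mp4")  # hypothetical media file
sidecar = write_sidecar(media, [
    {"start": 300.0, "tag": "E02@300.0"},
    {"start": 65.0, "tag": "E01@65.0+60.0"},
])
starts = [t["start"] for t in read_sidecar(sidecar)]   # sorted: [65.0, 300.0]
```

The embedded variant would instead interleave the same tags into the media container at the corresponding scene positions; the sidecar variant leaves the original file untouched.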
By processing the scene information in the content of an audio/video file, the above method forms a parseable formatted tag and associates it with the audio/video file, thereby embedding real-time scene information into the audio/video file. The method can be applied to any audio/video file, providing the basic information for scene realization during playback. Because the formatted tag can be parsed by the computing unit and converted into control instructions for the smart devices, scene simulation is achieved: the same set of smart devices can realize scenes for multiple audio/video files, reusability is high, and the smart devices require no advance configuration.
A specific embodiment of the invention also provides a scene realization system.
Referring to Fig. 2, a structural diagram of the scene realization system.
The scene realization system includes a computing unit 201 and smart devices 202 connected to the computing unit 201.
The computing unit 201 is configured to parse the formatted tag associated with an audio/video file, form control instructions from the parsed tag content, and send the control instructions to the smart devices 202.
The audio/video file has had real-time scene information embedded using the above real-time scene information embedding method and carries a formatted tag corresponding to the scene information in the audio/video file; the formatted tag can be embedded in the audio/video file, or decoded together with it as an external file.
After parsing the content of the formatted tag, the computing unit converts it into control instructions for the smart devices and sends the control instructions to the smart devices over a data transmission channel established between the computing unit and the smart devices.
For example, the light, color, and color temperature of the surroundings can be adjusted with dimmable lamps or other light sources; the ambient humidity can be changed with a humidifier, air conditioner, fan, or heater; a combination of devices such as fans, air conditioners, and fan heaters can change the wind in the surroundings; air conditioners and heating or cooling equipment change the ambient temperature; a vibrating seat simulates vibration; and water-spraying or misting equipment produces rain.
The computing unit can automatically detect the smart devices in the surroundings and obtain the parameter information of each smart device, so as to optimize the control strategy according to the parameter information of the smart devices and the formatted tag content, forming control instructions that realize the scenes in the audio/video file as well as the available smart devices allow.
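One way this optimization might look is sketched below: the computing unit maps each parsed requirement onto every device capable of it, clamping values to what each device's parameters support. The device list, capability names, and instruction format are all assumptions for illustration.

```python
# Hypothetical detected devices with their capabilities and parameters.
DEVICES = [
    {"id": "ac1",   "capabilities": {"temperature", "wind"}, "max_wind_level": 3},
    {"id": "fan1",  "capabilities": {"wind"}, "max_wind_level": 5},
    {"id": "lamp1", "capabilities": {"brightness"}},
]

def form_instructions(tag_content: dict) -> list:
    """Map one parsed tag to an instruction for every capable device."""
    instructions = []
    for need, value in tag_content.items():
        for dev in DEVICES:
            if need in dev["capabilities"]:
                v = value
                if need == "wind":
                    # optimize per device parameters: clamp to what it supports
                    v = min(v, dev.get("max_wind_level", v))
                instructions.append({"device": dev["id"], "set": need, "value": v})
    return instructions

cmds = form_instructions({"wind": 4, "brightness": 0.2})
# fan1 can honor wind level 4; ac1 is clamped to its maximum of 3
```

Taking device parameters into account in this way is what lets the same formatted tags drive whatever equipment happens to be present.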
The smart devices 202 are configured to operate according to the control instructions sent by the computing unit and to change the surroundings, thereby simulating the scenes in the audio/video file. The smart devices 202 include environment-changing devices such as air conditioners, electric heaters, vibrating seats, humidifiers, lamps, fans, and water-spraying or misting equipment.
The above scene realization system can parse the formatted tag associated with an audio/video file and form control instructions for the smart devices, so that scenes are realized without manually pre-configuring the smart devices; reusability is high.
A specific embodiment of the invention also provides a scene realization method.
Referring to Fig. 3, a flow diagram of the scene realization method.
Step S301: providing an audio/video file, the audio/video file being associated with a formatted tag corresponding to scene information.
The audio/video file has had real-time scene information embedded using the above real-time scene information embedding method and carries a formatted tag corresponding to the scene information in the audio/video file; the formatted tag can be embedded in the audio/video file, or decoded together with it as an external file.
Step S302: parsing the formatted tag associated with the audio/video file.
The formatted tag can be a series of codes corresponding to the scene information; parsing by a computer yields the content corresponding to the formatted tag.
Step S303: forming control instructions from the parsed formatted tag content.
The control instructions control the operation of the smart devices so as to simulate the scene corresponding to the formatted tag content. For example, the light, color, and color temperature of the surroundings can be adjusted with dimmable lamps or other light sources; the ambient humidity can be changed with a humidifier, air conditioner, fan, or heater; a combination of devices such as fans, air conditioners, and fan heaters can change the wind in the surroundings; air conditioners and heating or cooling equipment change the ambient temperature; a vibrating seat simulates vibration; and water-spraying or misting equipment produces rain.
In a specific embodiment of the invention, step S303 further includes obtaining the parameter information of the smart devices and forming the control instructions from the parameter information of the smart devices and the formatted tag content. Smart devices with different parameters differ in operating efficiency, so considering the parameter information of the smart devices together with the formatted tag content optimizes the control strategy, forming control instructions that realize the scenes in the audio/video file as well as the available smart devices allow.
Step S304: sending the control instructions to the smart devices.
The control instructions can be sent to the smart devices over a wired or wireless data connection.
Step S305: the smart devices operate according to the control instructions, completing the scene realization of the audio/video file.
The above scene realization method can form control instructions for the smart devices from the formatted tag associated with an audio/video file, realizing scenes without advance configuration.
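Steps S301 to S305 together amount to a playback-side dispatch loop: as playback time passes each tag's start time, that tag is parsed and its instructions are sent out. The sketch below simulates this with a stub `send` callback standing in for the wired/wireless channel; all names are illustrative.

```python
def dispatch(tags, playback_time, sent, send):
    """Send every not-yet-sent tag whose start time has been reached."""
    for tag in tags:
        if tag["start"] <= playback_time and tag["tag"] not in sent:
            send(tag["tag"])            # hand off to the transmission channel
            sent.add(tag["tag"])

log = []                                # stub channel: record what was sent
sent = set()
tags = [{"start": 65.0, "tag": "E01@65.0+60.0"},
        {"start": 300.0, "tag": "E02@300.0"}]
for t in (0.0, 70.0, 310.0):            # simulated playback clock ticks
    dispatch(tags, t, sent, send=log.append)
# log now holds both tags, dispatched in scene order
```

A real player would call `dispatch` periodically against the actual media clock, and `send` would serialize each instruction onto the device channel.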
The above are only preferred embodiments of the invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications shall also fall within the protection scope of the invention.
Claims (9)
1. A real-time scene information embedding method, characterized by comprising:
providing an audio/video file;
determining, from the content of the audio/video file, the scene information of the scene to be realized by smart devices;
labeling the scene information to form label information, the label information including typical-environment scene information and specific environment description information;
formatting the label information into a formatted tag, the formatted tag being parseable and convertible into control instructions for the smart devices;
associating the formatted tag with the audio/video file.
2. The real-time scene information embedding method according to claim 1, characterized in that associating the formatted tag with the audio/video file comprises: inserting the formatted tag into the audio/video file to form a new audio/video file.
3. The real-time scene information embedding method according to claim 1, characterized in that associating the formatted tag with the audio/video file comprises: integrating the formatted tag with time information to form an external file for the audio/video.
4. The real-time scene information embedding method according to claim 1, characterized in that the typical-environment scene information includes grassland, forest, desert, glacier, snowfield, beach, or sea; the specific environment description information includes brightness, ambient color temperature, temperature, ambient humidity, wind, rain, vibration, gases of different odors, or bubbles, together with a start time and a duration.
5. The real-time scene information embedding method according to claim 1, characterized in that the label information further includes: time points at which the scene information in the audio/video changes sharply, and the trigger events corresponding to those time points, each trigger event being an operating event of a smart device; and a set of environment configuration information with the respective durations.
6. A scene realization system, characterized by comprising:
a computing unit connected to smart devices, configured to parse the formatted tag associated with an audio/video file into which real-time scene information has been embedded according to any one of claims 1 to 5, form control instructions from the parsed formatted tag content, and send the control instructions to the smart devices;
the smart devices, configured to operate according to the control instructions sent by the computing unit.
7. The scene realization system according to claim 6, characterized in that the computing unit is further configured to obtain the functions and parameter information of the smart devices, and to form the control instructions from the parameter information of the smart devices and the formatted tag content.
8. A scene realization method, characterized by comprising:
providing an audio/video file into which real-time scene information has been embedded according to any one of claims 1 to 5, the audio/video file being associated with a formatted tag corresponding to the scene information;
parsing the formatted tag associated with the audio/video file;
forming control instructions from the parsed formatted tag content;
sending the control instructions to smart devices;
operating the smart devices according to the control instructions.
9. The scene realization method according to claim 8, characterized in that it further comprises: obtaining the parameter information of the smart devices, and forming the control instructions from the parameter information of the smart devices and the formatted tag content.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610341769.9A CN105956170B (en) | 2016-05-20 | 2016-05-20 | Real-time scene information embedding method, Scene realization system and implementation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105956170A CN105956170A (en) | 2016-09-21 |
CN105956170B true CN105956170B (en) | 2019-07-19 |
Family
ID=56910387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610341769.9A Expired - Fee Related CN105956170B (en) | 2016-05-20 | 2016-05-20 | Real-time scene information embedding method, Scene realization system and implementation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105956170B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107526849A (en) * | 2017-09-30 | 2017-12-29 | 联想(北京)有限公司 | A kind of data processing method, electronic equipment, data processing equipment and system |
CN107613377A (en) * | 2017-10-16 | 2018-01-19 | 北京奇艺世纪科技有限公司 | A kind of video broadcasting method and device |
CN110475121B (en) * | 2018-05-10 | 2022-02-11 | 腾讯科技(深圳)有限公司 | Video data processing method and device and related equipment |
CN110493090B (en) * | 2019-08-22 | 2022-01-28 | 三星电子(中国)研发中心 | Method and system for realizing intelligent home theater |
CN111442490B (en) * | 2020-04-09 | 2021-12-21 | 青岛海尔空调器有限总公司 | Air conditioner and control method thereof |
CN111442464B (en) * | 2020-04-09 | 2021-12-21 | 青岛海尔空调器有限总公司 | Air conditioner and control method thereof |
CN114608175A (en) * | 2022-02-28 | 2022-06-10 | 青岛海尔空调器有限总公司 | Intelligent adjusting method and intelligent adjusting system for indoor environment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102024009A (en) * | 2010-03-09 | 2011-04-20 | 李平辉 | Generating method and system of video scene database and method and system for searching video scenes |
CN102651035A (en) * | 2012-04-12 | 2012-08-29 | 北京百纳威尔科技有限公司 | Method and device for generating text and mobile terminal |
CN103729409A (en) * | 2013-12-10 | 2014-04-16 | 北京智谷睿拓技术服务有限公司 | Method and device for generating visual scene information |
CN103927341A (en) * | 2014-03-27 | 2014-07-16 | 广州华多网络科技有限公司 | Method and device for acquiring scene information |
CN104462099A (en) * | 2013-09-16 | 2015-03-25 | 联想(北京)有限公司 | Information processing method and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006127694A (en) * | 2004-11-01 | 2006-05-18 | Sony Corp | Recording medium, recorder, recording method, data retrieval device, data retrieval method and data generator |
US8300953B2 (en) * | 2008-06-05 | 2012-10-30 | Apple Inc. | Categorization of digital media based on media characteristics |
US20160085762A1 (en) * | 2014-09-23 | 2016-03-24 | Smoothweb Technologies Ltd. | Multi-Scene Rich Media Content Rendering System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190719 Termination date: 20200520 |