CN103294880A - Information output method and device as well as electronic device


Info

Publication number
CN103294880A
CN103294880A (application CN201210046679.9A; granted as CN103294880B)
Authority
CN
China
Prior art keywords
parameter
environment
information
audio parameter
audio
Prior art date
Legal status
Granted
Application number
CN2012100466799A
Other languages
Chinese (zh)
Other versions
CN103294880B (en)
Inventor
张浦
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201210046679.9A
Publication of CN103294880A
Application granted
Publication of CN103294880B
Active legal status
Anticipated expiration legal status

Landscapes

  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Stereophonic System (AREA)

Abstract

The invention discloses an information output method for providing a user with a medium for environment cognition. The method comprises the following steps: acquiring environment information of a first environment, wherein the first environment comprises at least one object and each of the at least one object has a corresponding position parameter; converting the position parameters corresponding to the at least one object in the first environment into first sound parameters respectively; and outputting first sound information according to the first sound parameters. The invention further discloses a device for implementing the method and an electronic device.

Description

Information output method, device, and electronic device
Technical field
The present invention relates to signal processing technology, and in particular to an information output method, an information output device, and an electronic device.
Background art
In everyday life, a blind person walking alone needs a guide device to assist with walking.
Various electronic guide devices are now on the market; among the more common are mobile phones with a guide function and other handheld guide devices.
For example, a guide device may store different guide markers in advance and, upon detecting a prestored marker in an image captured by the blind user, prompt the user by voice; alternatively, the guide device may acquire figures and signals through sensors and prompt the user by voice.
In the course of implementing the technical solutions of the embodiments of the present application, the inventor found that the prior art has at least the following technical problem:
The prior art requires different guide markers to be stored in advance and objects in the surrounding environment to be labeled, which is rather cumbersome. Furthermore, it can only provide the user with a spoken description of an object's name and position, and cannot give the user an intuitive, image-like understanding of the environment.
Summary of the invention
Embodiments of the invention provide an information output method, device, and electronic device, used to provide the user with a medium for environment cognition.
An information output method comprises the following steps:
obtaining environment information of a first environment, wherein the first environment comprises at least one object, and each of the at least one object has a corresponding position parameter;
converting the position parameter corresponding to each of the at least one object in the first environment into a first sound parameter; and
outputting first sound information according to the first sound parameter.
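The three steps above can be sketched as follows; the two-dimensional coordinate convention and all helper names are illustrative assumptions, not part of the disclosure.

```python
import math

def position_to_sound_parameter(position):
    """Convert an object's (x, y) position parameter, given relative to
    the user, into a first sound parameter: an azimuth in degrees
    (0 = straight ahead, positive = to the right) plus a distance."""
    x, y = position  # x: metres to the right, y: metres ahead
    azimuth = math.degrees(math.atan2(x, y))
    distance = math.hypot(x, y)
    return {"azimuth_deg": azimuth, "distance_m": distance}

def output_first_sound_information(environment):
    """For each object in the first environment, convert its position
    parameter into a first sound parameter and collect the results."""
    return {name: position_to_sound_parameter(pos)
            for name, pos in environment.items()}

# A car 3 m to the right and 3 m ahead lies at a 45-degree azimuth.
params = output_first_sound_information({"car": (3.0, 3.0)})
print(round(params["car"]["azimuth_deg"]))  # → 45
```

An actual device would feed the resulting azimuth and distance into a spatial-audio renderer rather than print them.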
Preferably, each of the at least one object in the first environment further has a corresponding vision parameter;
after converting the position parameter corresponding to each of the at least one object in the first environment into a first sound parameter, the method further comprises: converting the vision parameter corresponding to each of the at least one object into a second sound parameter;
and the step of outputting first sound information according to the first sound parameter comprises: outputting the first sound information and second sound information according to the first sound parameter and the second sound parameter.
Preferably, the step of converting the vision parameter corresponding to each of the at least one object into a second sound parameter comprises:
obtaining the vision parameter corresponding to each of the at least one object, wherein the vision parameter comprises at least a visual color and/or a visual shape; and
converting each obtained vision parameter into a second sound parameter.
Preferably, the step of converting each obtained vision parameter into a second sound parameter comprises:
obtaining a correspondence set of vision parameters and second sound parameters, the correspondence set comprising at least one correspondence between a vision parameter and a second sound parameter; and
determining the second sound parameter according to the obtained vision parameter and the correspondences comprised in the correspondence set.
Preferably, the first sound parameter comprises at least an azimuth, and the second sound parameter comprises at least one or more of frequency, volume, and timbre.
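A minimal sketch of the correspondence-set lookup described above; every concrete mapping value (frequencies, volumes, timbre names) is an assumed example, not taken from the disclosure.

```python
# Hypothetical correspondence set mapping vision parameters to second
# sound parameters: color → frequency, shape → volume, and object
# attribute → timbre. All concrete values are assumptions.
CORRESPONDENCE_SET = {
    "color": {"red": 880.0, "green": 660.0, "blue": 440.0},    # Hz
    "shape": {"cylinder": 0.6, "cuboid": 0.8, "sphere": 0.4},  # volume 0..1
    "attribute": {"car": "horn", "building": "piano", None: "default"},
}

def vision_to_second_sound_parameter(vision):
    """Look up each obtained vision parameter in the correspondence set
    to determine the second sound parameter."""
    return {
        "frequency_hz": CORRESPONDENCE_SET["color"][vision["color"]],
        "volume": CORRESPONDENCE_SET["shape"][vision["shape"]],
        "timbre": CORRESPONDENCE_SET["attribute"].get(vision.get("attribute")),
    }

second = vision_to_second_sound_parameter(
    {"color": "red", "shape": "cylinder", "attribute": "car"})
print(second["frequency_hz"], second["timbre"])  # → 880.0 horn
```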
Preferably, before converting the position parameter corresponding to each of the at least one object in the first environment into a first sound parameter, the method further comprises:
monitoring the position of the at least one object to obtain a monitoring result;
judging whether the monitoring result shows that the position of the at least one object has changed; and
when the judgment result is yes, obtaining the updated position parameter of each of the at least one object;
wherein the step of converting the position parameter corresponding to each of the at least one object in the first environment into a first sound parameter comprises: converting the updated position parameter corresponding to each of the at least one object in the first environment into a first sound parameter.
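The monitoring-and-update logic above can be sketched as a simple comparison of successive position snapshots; the dictionary representation is an assumption for illustration.

```python
def monitor_positions(previous, current):
    """Compare the monitored positions with the previous ones; for every
    object whose position has changed, return the updated position
    parameter, which would then be converted into a first sound
    parameter in place of the stale one."""
    updated = {}
    for name, pos in current.items():
        if previous.get(name) != pos:
            updated[name] = pos
    return updated

prev = {"car": (3.0, 3.0), "tree": (-1.0, 5.0)}
curr = {"car": (2.0, 4.0), "tree": (-1.0, 5.0)}
print(monitor_positions(prev, curr))  # → {'car': (2.0, 4.0)}
```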
Preferably, the step of obtaining the environment information of the first environment comprises:
collecting image information of the first environment; and
processing the image information to obtain the environment information of the first environment.
Preferably, the step of obtaining the environment information of the first environment comprises:
detecting the first environment; and
obtaining the environment information of the first environment through the detection.
An information output device comprises:
an acquisition module, configured to obtain environment information of a first environment, wherein the first environment comprises at least one object, and each of the at least one object has a corresponding position parameter;
a processing module, configured to convert the position parameter corresponding to each of the at least one object in the first environment into a first sound parameter; and
an output module, configured to output first sound information according to the first sound parameter.
Preferably, each of the at least one object in the first environment further has a corresponding vision parameter; the processing module is further configured to convert the vision parameter corresponding to each of the at least one object into a second sound parameter; and the output module is further configured to output the first sound information and second sound information according to the first sound parameter and the second sound parameter.
Preferably, the acquisition module is further configured to obtain the vision parameter corresponding to each of the at least one object, the vision parameter comprising at least a visual color and/or a visual shape; and the processing module is further configured to convert each obtained vision parameter into a second sound parameter.
Preferably, the processing module further comprises:
a first acquiring unit, configured to obtain a correspondence set of vision parameters and second sound parameters, the correspondence set comprising at least one correspondence between a vision parameter and a second sound parameter; and
a determining unit, configured to determine the second sound parameter according to the obtained vision parameter and the correspondences comprised in the correspondence set.
Preferably, the first sound parameter comprises at least an azimuth, and the second sound parameter comprises at least one or more of frequency, volume, and timbre.
Preferably, the acquisition module further comprises:
a monitoring unit, configured to monitor the position of the at least one object to obtain a monitoring result;
a judging unit, configured to judge whether the monitoring result shows that the position of the at least one object has changed; and
a second acquiring unit, configured to obtain, when the judgment result is yes, the updated position parameter of each of the at least one object;
wherein the processing module is further configured to convert the updated position parameter corresponding to each of the at least one object in the first environment into a first sound parameter.
Preferably, the acquisition module further comprises:
a collecting unit, configured to collect image information of the first environment; and
a second acquiring unit, configured to process the image information to obtain the environment information of the first environment.
Preferably, the acquisition module further comprises:
a monitoring unit, configured to detect the first environment; and
a second acquiring unit, configured to obtain the environment information of the first environment through the detection.
An electronic device comprises:
a fixing unit, configured to fix the device on the user's body;
a collecting unit, located at the user's front after the user wears the device, configured to collect and process environment information of a first environment;
a processing unit, configured to convert the position parameter corresponding to each of at least one object in the first environment into a first sound parameter; and
a sound output unit, located near the user's ear after the user wears the device, configured to output information.
In the information output method of the embodiments of the invention, environment information of a first environment is obtained, wherein the first environment comprises at least one object and each of the at least one object has a corresponding position parameter; the position parameter corresponding to each of the at least one object in the first environment is converted into a first sound parameter; and first sound information is output according to the first sound parameter. Because the position parameter corresponding to each object in the environment is converted into a first sound parameter before being output, a blind user can perceive the positions of surrounding objects through the sound signals and thereby locate them, so the method better serves as a navigation aid for the blind and provides the user with a medium for environment cognition.
Brief description of the drawings
Fig. 1 is the main flowchart of the information output method in an embodiment of the invention;
Fig. 2 is the main structural diagram of an information output device in an embodiment of the invention;
Fig. 3 is the main structural diagram of another information output device in an embodiment of the invention.
Embodiments
In the information output method of the embodiments of the invention, environment information of a first environment is obtained, wherein the first environment comprises at least one object and each of the at least one object has a corresponding position parameter; the position parameter corresponding to each of the at least one object in the first environment is converted into a first sound parameter; and first sound information is output according to the first sound parameter. Because the position parameter corresponding to each object in the environment is converted into a first sound parameter before being output, a blind user can perceive the positions of surrounding objects through the sound signals and thereby locate them, so the method better serves as a navigation aid for the blind and provides the user with a medium for environment cognition.
Referring to Fig. 1, the main flow of the information output method in an embodiment of the invention is as follows:
Step 101: obtain environment information of a first environment, wherein the first environment comprises at least one object, and each of the at least one object has a corresponding position parameter.
In embodiments of the invention, the environment information of the first environment can be obtained by devices such as a camera, a video camera, a sensor, or a radar.
For example, a camera or video camera can obtain video information, image information, motion track information, and so on of objects in the surrounding environment; the position information of an object can be determined at least from the video information or image information, and a change in an object's position can be determined from the motion track information. A sensor can obtain image information, position information, velocity information, motion track information, and so on of objects in the surrounding environment, and a radar can likewise obtain position information, velocity information, and motion track information of surrounding objects.

Specifically, image information of the first environment can be collected by a camera or video camera and then processed to obtain the environment information of the first environment. Alternatively, the first environment can be detected by a sensor or radar to obtain the parameter information of the at least one object comprised in the first environment; the parameter information of the at least one object constitutes the environment information of the first environment, so the sensor or radar obtains the environment information of the first environment through the detection. The first environment comprises at least one object, and the parameter information comprises at least a position parameter and a vision parameter.
After the position parameter of the at least one object in the first environment is obtained, the position of the at least one object can further be monitored to obtain a monitoring result, and it is judged whether the monitoring result shows that the position of the at least one object has changed. When the judgment result is yes, the updated position parameter of each of the at least one object is obtained; the updated position parameter can represent the new position of the at least one object, or it can represent the motion track of the at least one object.
The position parameter of each object comprised in the first environment can be determined at least from the obtained environment information; the position parameter can be determined from the position information. If the obtained environment information comprises visual information (for example, video or images can be visual information), the vision parameter of a particular object can also be determined from the obtained visual information, the vision parameter comprising at least a visual color or a visual shape. The attribute information of the particular object, i.e., what kind of object it specifically is, can also be determined from the obtained visual information.
Step 102: convert the position parameter corresponding to each of the at least one object in the first environment into a first sound parameter.
If the position of an object has changed so that its position parameter has been updated, this step converts the updated position parameter of that object into the first sound parameter. The first sound parameter comprises at least an azimuth, i.e., the sound bearing.
For example, suppose the first environment comprises one object, a car, located at the right front of the information output device, and monitoring determines that the car's position has not changed. The position parameter corresponding to the car can then be converted into a first sound parameter in which the sound bearing is the right front. If the first sound information is finally output through earphones, the first sound information corresponding to the car can be output at the forward position of the right earphone, indicating that the car is at the right front. The position parameter in the embodiments of the invention concerns only the specific position, or the motion track, of an object; it does not concern what the object specifically is, nor the object's shape, color, or similar information. The timbre of the first sound information can be preset or selected at random.
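One way to realize "the sound bearing is the right front" on a pair of earphones is a constant-power stereo pan; this sketch, with its assumed angle convention, is illustrative only and not part of the disclosure.

```python
import math

def azimuth_to_stereo_gains(azimuth_deg):
    """Map a sound azimuth (-90 = hard left, 0 = straight ahead,
    +90 = hard right) to left/right earphone gains using a
    constant-power pan law, so total perceived power stays constant."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # 0..90 degrees
    return math.cos(theta), math.sin(theta)

# The car at the right front (+45 degrees) is weighted towards the
# right channel, matching the earphone example above.
left, right = azimuth_to_stereo_gains(45.0)
print(right > left)  # → True
```

A full implementation would also encode front/rear and elevation, e.g. with HRTF-based rendering, which a simple pan law cannot express.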
If N objects in the first environment also have corresponding vision parameters (N being an integer not less than 0), the vision parameter corresponding to each such object can also be obtained and converted into a second sound parameter, wherein the second sound parameter comprises at least one or more of frequency (i.e., the pitch of the sound), volume (i.e., the loudness of the sound), and timbre.

When converting the vision parameter corresponding to each of the at least one object into a second sound parameter, a correspondence set of vision parameters and second sound parameters can be obtained, the correspondence set comprising at least one correspondence between a vision parameter and a second sound parameter. For example, the correspondence set can comprise a correspondence between color and frequency, a correspondence between shape and volume, a correspondence between object attribute and timbre, and so on. The second sound parameter can then be determined according to the obtained vision parameter and the correspondences comprised in the correspondence set; for example, if the obtained vision parameter comprises a color, the frequency in the second sound parameter can be determined according to the correspondence between color and frequency in the correspondence set.
In the embodiments of the invention, the shape of an object can be represented by sound frequency and its color by sound volume. For example, the sound frequency of a point can be proportional to the height of the corresponding point on the object, and the sound volume of a point can be proportional to the darkness of that point's color, with darker colors sounding louder. Alternatively, the sound frequency of a point can be proportional to the darkness of that point's color, with darker colors giving higher frequencies, while the sound volume of a point is proportional to the height of the corresponding point. The embodiments of the invention are described with sound frequency corresponding to object shape and sound volume corresponding to object color.
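The per-point mapping just described (frequency proportional to a point's height, volume proportional to the darkness of its color) can be sketched as follows; the base frequency, range, and scaling constants are assumptions.

```python
def point_to_second_sound_parameter(height_m, darkness, max_height=3.0):
    """Per-point mapping: the sound frequency of a point is proportional
    to the point's height on the object, and the sound volume is
    proportional to how dark the point's color is (darker = louder).
    The 200-1000 Hz range and 3 m height ceiling are assumed values."""
    base_hz, span_hz = 200.0, 800.0
    frequency_hz = base_hz + span_hz * min(height_m / max_height, 1.0)
    volume = max(0.0, min(darkness, 1.0))  # darkness normalized to 0..1
    return frequency_hz, volume

# A point 1.5 m up a dark (near-black) car body: mid-high pitch, loud.
f, v = point_to_second_sound_parameter(height_m=1.5, darkness=0.9)
print(round(f), v)  # → 600 0.9
```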
For example, the car at the right front of the information output device is roughly a cuboid; its shape can be characterized by different sound frequencies and its black color by the sound volume. At output time, the first sound information can be output first, letting the user know the specific position of the object, and the second sound information can be output afterwards, letting the user know the specific shape, color, and similar information of the object; alternatively, the first sound information and the second sound information can be output simultaneously, or only the first sound information can be output.
If the attribute information of a particular object has also been determined from the vision parameter, that attribute information can be characterized by the timbre. Preferably, if the object can itself make a sound, it can be characterized by that sound, which is more vivid: for example, if the particular object is a car, the timbre in the second sound parameter, i.e., the timbre of the second sound information, can be a car horn. If the attribute information of the particular object has not been determined from the vision parameter, or if the particular object cannot itself make a sound, a timbre can be preset for the second sound information or selected at random. For example, the sound of running water could be preset to characterize a car and a piano sound to characterize a building, or the timbre could be selected at random; however, presetting a timbre for each kind of object is more user-friendly and easier to remember, since the user becomes accustomed to it over long-term use. Even if the particular object can make a sound, the timbre of the corresponding second sound information can still be preset or selected at random; when presetting the timbre, it need not be set to the sound that the particular object makes.
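The timbre-selection rule above can be sketched as a small lookup with a random fallback; the preset table entries are illustrative assumptions, not values from the disclosure.

```python
import random

# Hypothetical preset table: an object that can make a sound is
# represented by that sound (a car by its horn); unknown or silent
# objects fall back to a preset pool or a random choice.
PRESET_TIMBRES = {"car": "horn", "building": "piano", "bicycle": "bell"}

def choose_timbre(attribute, rng=random):
    """Pick the timbre of the second sound information for an object
    attribute; fall back to a random preset when the attribute is
    unknown or was not determined from the vision parameter."""
    if attribute in PRESET_TIMBRES:
        return PRESET_TIMBRES[attribute]
    return rng.choice(["flute", "strings", "chime"])

print(choose_timbre("car"))  # → horn
```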
Step 103: output first sound information according to the first sound parameter.
The timbre of the first sound information can be preset or selected at random. For example, suppose a bicycle passes by the user, its motion track running from the user's left rear to the left front. The motion track information can be collected and converted into a first sound parameter, and the first sound information can be output according to that first sound parameter, for example through earphones: at output time, the output sound moves from the rear of the left earphone to the front, indicating that an object has moved from the user's left rear to the left front. The first sound information is used to represent the position or the motion track of an object.
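A motion track can be rendered as a sequence of gradually shifting sound bearings, as in the bicycle example above; the linear interpolation and angle convention are assumptions for illustration.

```python
def track_to_pan_sequence(start_azimuth, end_azimuth, steps):
    """Represent a motion track as a sequence of sound azimuths, so the
    output sound moves gradually from the start bearing to the end
    bearing (e.g. from the user's left rear to the left front)."""
    return [start_azimuth + (end_azimuth - start_azimuth) * i / (steps - 1)
            for i in range(steps)]

# Bicycle passing from the left rear (-135 deg) to the left front (-45 deg).
seq = track_to_pan_sequence(-135.0, -45.0, steps=5)
print(seq[0], seq[-1])  # → -135.0 -45.0
```

Each azimuth in the sequence would then be rendered through the earphones in turn, e.g. with a pan law, so the sound appears to travel with the object.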
Second sound information can also be output according to the second sound parameter; the timbre of the second sound information can be preset or selected at random. For example, suppose there is a cylindrical carton on the user's right, and the first sound parameter corresponding to the carton (the sound bearing is the right) and the second sound parameter (the shape is cylindrical and the color is red) have been determined. The first sound information can then be output first according to the first sound parameter, and the second sound information afterwards according to the second sound parameter; outputting them in sequence avoids the heard sounds becoming confused. Alternatively, the first sound information and the second sound information can be output simultaneously. The second sound information is used to represent the shape and/or the color of an object, and can also be used to represent the attribute of an object, i.e., what kind of object it specifically is.
The information output method in the embodiments of the invention is introduced below through several specific embodiments.
Embodiment one:
The first environment is detected by a sensor. The sensor detects the first environment and obtains the parameter information of at least one object in the first environment. In this embodiment, the first environment comprises one object whose parameter information is a position parameter showing that the object is at the right rear. After this position parameter is obtained, the first environment can continue to be monitored. In this embodiment the object moves, so the monitoring result shows that the position of the object has changed, and the updated position parameter of the object can be obtained. The updated position parameter can represent the updated position of the object, which is the right front, or it can represent the motion track of the object, which runs from the right rear to the right front. In this embodiment the object moves quickly, and the sensor captures only the object's motion track and no other information, so the vision parameter of the object cannot be obtained. The updated position parameter is converted into a first sound parameter, and first sound information is output according to the first sound parameter. In this embodiment the output is through earphones: if the updated position parameter represents the updated position of the object, the first sound information is output in the forward channel of the right earphone; if the updated position parameter represents the updated motion track information of the object, the first sound information can, at output time, start from the rear of the right earphone and move gradually to the front, representing the object's motion track from the right rear to the right front.
Embodiment two:
Image information of the first environment is collected by a camera. In this embodiment the first environment comprises one object, so what the camera collects is image information about that object. The image information is processed to obtain the environment information of the first environment. The parameter information of the object comprises a position parameter showing that the object is at the left rear. After this position parameter is obtained, the first environment can continue to be monitored; in this embodiment the monitoring result shows that the object is at rest and its position has not changed, so the position parameter corresponding to the object can be converted directly into the first sound parameter. Because the image information of the first environment was collected by a camera, the obtained parameter information of the object can also comprise a vision parameter. A correspondence set of vision parameters and second sound parameters is obtained, comprising at least one correspondence between a vision parameter and a second sound parameter; in this embodiment the correspondence set comprises a correspondence between shape and volume and a correspondence between color and frequency. The second sound parameter is determined according to the obtained vision parameter and the correspondences comprised in the correspondence set. In this embodiment the attribute of the object is not determined from the collected image information, i.e., no attention is paid to what kind of object it is. The output in this embodiment is through earphones, so the first sound information and the second sound information can be output at the rearward position of the left earphone, either simultaneously or in turn. The timbre of the second sound information can be a preset timbre or selected at random.
Embodiment three:
Image information of the first environment is collected by a camera. In this embodiment the first environment comprises one object, so what the camera collects is image information about that object. The image information is processed to obtain the environment information of the first environment. The parameter information of the object comprises a position parameter showing that the object is at the left front. After this position parameter is obtained, the first environment can continue to be monitored; in this embodiment the monitoring result shows that the object is at rest and its position has not changed, so the position parameter corresponding to the object can be converted directly into the first sound parameter. Because the image information of the first environment was collected by a camera, the obtained parameter information of the object can also comprise a vision parameter. A correspondence set of vision parameters and second sound parameters is obtained, comprising at least one correspondence between a vision parameter and a second sound parameter; in this embodiment the correspondence set comprises a correspondence between shape and volume and a correspondence between color and frequency. The second sound parameter is determined according to the obtained vision parameter and the correspondences comprised in the correspondence set. In this embodiment the attribute of the object is determined from the collected image information: the object is a motorcycle. The output in this embodiment is through earphones, so the first sound information and the second sound information can be output at the forward position of the left earphone, either simultaneously or in turn. The timbre of the second sound information can be a preset timbre, for example the honk of a motorcycle, allowing the listener to determine directly from the sound what kind of object it is; alternatively, the timbre can be selected at random.
Referring to Fig. 2, an embodiment of the invention provides an information output apparatus, which can comprise an acquisition module 201, a processing module 202 and an output module 203.
The acquisition module 201 is configured to obtain environmental information of a first environment, the first environment containing at least one object, each of the at least one object having a corresponding location parameter. The acquisition module 201 can also be configured to obtain the vision parameter corresponding to each of the at least one object; the vision parameter includes at least a visual color and/or a visual shape.
The acquisition module 201 can further comprise a monitoring unit, a judging unit, a second acquiring unit and a collecting unit.
The monitoring unit can be configured to monitor the position of the at least one object and obtain a monitoring result. The monitoring unit can also be configured to probe the first environment.
The judging unit can be configured to judge whether the monitoring result shows that the position of the at least one object has changed.
The second acquiring unit can be configured to obtain, when the judgment result is yes, the updated location parameter of each of the at least one object. The second acquiring unit can also be configured to process the image information to obtain the environmental information of the first environment, or to obtain the environmental information of the first environment through the probing performed by the monitoring unit.
The collecting unit can be configured to collect image information of the first environment.
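The monitoring/judging/updating flow of these units can be sketched as below. The `TrackedObject` class and the polling interface are illustrative assumptions; the point is only that an updated location parameter is produced when, and only when, the judging step detects a position change.

```python
# Minimal sketch of the monitoring unit, judging unit and second
# acquiring unit working together. All names are assumed for illustration.
class TrackedObject:
    def __init__(self, name, bearing_deg):
        self.name = name
        self.bearing_deg = bearing_deg

def monitor_positions(objects, read_bearing):
    """Return updated location parameters only for objects whose
    position has changed."""
    updates = {}
    for obj in objects:
        new_bearing = read_bearing(obj.name)   # monitoring unit: poll position
        if new_bearing != obj.bearing_deg:     # judging unit: did it change?
            obj.bearing_deg = new_bearing      # second acquiring unit: update
            updates[obj.name] = new_bearing
    return updates

objs = [TrackedObject("motorcycle", -45.0), TrackedObject("wall", 30.0)]
# Simulated sensor readings: the motorcycle has moved, the wall has not.
readings = {"motorcycle": -20.0, "wall": 30.0}
changed = monitor_positions(objs, readings.get)
print(changed)  # {'motorcycle': -20.0}
```

Only the changed object would then be re-converted into a first audio parameter; stationary objects keep their existing one.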
The processing module 202 is configured to convert the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter, where the first audio parameter includes at least a sound bearing. The processing module 202 can also be configured to convert the vision parameter corresponding to each of the at least one object, as obtained by the acquisition module 201, into a second audio parameter. In this embodiment the first audio parameter includes at least a bearing, and the second audio parameter includes at least one or more of frequency, volume and timbre. The processing module 202 can also be configured to convert the updated location parameter corresponding to each of the at least one object in the first environment into a first audio parameter.
The processing module 202 can further comprise a first acquiring unit and a determining unit.
The first acquiring unit can be configured to obtain a correspondence set between the vision parameter and the second audio parameter, the correspondence set containing at least one correspondence between a vision parameter and a second audio parameter.
The determining unit can be configured to determine the second audio parameter according to the obtained vision parameter and the correspondences contained in the correspondence set.
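The correspondence set the first acquiring unit fetches, and the lookup the determining unit performs, can be represented as an explicit table of vision-parameter-to-audio-parameter pairs. The concrete pairs below are assumptions for illustration, not values from the patent.

```python
# Assumed correspondence set: (vision kind, vision value,
#                              audio kind, audio value) tuples.
CORRESPONDENCE_SET = [
    ("color", "red", "frequency_hz", 880.0),
    ("color", "blue", "frequency_hz", 440.0),
    ("shape", "round", "volume", 0.8),
    ("shape", "angular", "volume", 0.4),
]

def determine_second_audio_parameter(vision_params):
    """Determining unit: collect every audio parameter whose
    vision parameter matches the object's vision parameters."""
    result = {}
    for v_kind, v_value, a_kind, a_value in CORRESPONDENCE_SET:
        if vision_params.get(v_kind) == v_value:
            result[a_kind] = a_value
    return result

second = determine_second_audio_parameter({"color": "red", "shape": "round"})
print(second)  # {'frequency_hz': 880.0, 'volume': 0.8}
```

In a device the table would live in storage (cf. storage unit 305 of Fig. 3) rather than being hard-coded.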
The output module 203 is configured to output first sound information according to the first audio parameter. The output module 203 can also be configured to output the first sound information and second sound information according to the first audio parameter and the second audio parameter.
Referring to Fig. 3, an embodiment of the invention further provides an information output apparatus comprising a fixing unit 301, a collecting unit 302, a sound output unit 303 and a processing unit 304. The apparatus can be a physical entity device, for example a wearable device such as a helmet-type or earphone-type device.
The fixing unit 301 is used to fix the apparatus on the user's body. For example, the whole housing of a helmet, or the in-ear earbud of an in-ear headset, serves as the fixing unit 301; it keeps the information output apparatus fixed on the user's body and makes the apparatus convenient to use.
The collecting unit 302 faces forward once the user wears the apparatus and is used to collect and process the environmental information of the first environment. In this embodiment the collecting unit 302 can be a device such as a camera, a video camera, a sensor or a radar. For example, the collecting unit 302 can collect image information of the first environment and process the image information to obtain the environmental information of the first environment. Alternatively, the collecting unit 302 can probe the first environment and obtain the environmental information of the first environment through the probing.
The sound output unit 303 is located near the user's ears once the user wears the apparatus and is used to output information. In this embodiment the sound output unit 303 can be a device such as a loudspeaker. The sound output unit 303 can output the first sound information and the second sound information to the user.
The processing unit 304 is configured to convert the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter, and can also be configured to convert the vision parameter corresponding to each of the at least one object in the first environment into a second audio parameter.
The apparatus can further comprise a storage unit 305, which can be used to store the correspondence set between vision parameters and audio parameters, and can also store the preset timbres of corresponding objects.
The apparatus collects information about the region the user is facing, converts the collected information into sound information, and outputs it to the user through the sound output unit 303. Just as a sighted person watches the surrounding world with the eyes, the apparatus lets the user "listen" to the surrounding world with the ears.
The information output apparatus in this embodiment can correspond to the information output apparatus shown in Fig. 2. For example, the collecting unit 302 can correspond to the acquisition module 201 in Fig. 2, i.e. it can perform the same or corresponding functions; the sound output unit 303 can correspond to the output module 203 in Fig. 2; and the processing unit 304, either alone or together with the storage unit 305, can correspond to the processing module 202 in Fig. 2. That is, the information output apparatus in this embodiment can be regarded as a physical implementation of the information output apparatus shown in Fig. 2.
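The wearable device's end-to-end data path, from collecting unit through processing unit to sound output unit, can be sketched as below. Every interface here is an illustrative assumption (printing stands in for audio rendering, a fixed list stands in for camera/radar output).

```python
# End-to-end sketch of the device's data path. All names assumed.
def collect():
    """Collecting unit stand-in: returns detected objects with
    location and vision parameters."""
    return [{"name": "motorcycle", "bearing_deg": -45.0, "color": "red"}]

def process(env):
    """Processing unit stand-in: convert location and vision
    parameters into audio parameters."""
    cues = []
    for obj in env:
        pan = max(-90.0, min(90.0, obj["bearing_deg"])) / 90.0  # first audio parameter
        freq = {"red": 880.0, "blue": 440.0}.get(obj["color"], 660.0)  # second
        cues.append({"name": obj["name"], "pan": pan, "frequency_hz": freq})
    return cues

def output(cues):
    """Sound output unit stand-in: format one cue per object."""
    return ["%s: pan=%.2f freq=%.0fHz" % (c["name"], c["pan"], c["frequency_hz"])
            for c in cues]

lines = output(process(collect()))
print(lines[0])  # motorcycle: pan=-0.50 freq=880Hz
```

A real device would run this loop continuously as the user turns, re-collecting and re-rendering so that the sound field tracks the environment.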
The information output method in the embodiments of the invention obtains environmental information of a first environment containing at least one object, each object having a corresponding location parameter; converts the location parameter corresponding to each object in the first environment into a first audio parameter; and outputs first sound information according to the first audio parameter. Because the location parameter of each surrounding object is converted into a first audio parameter before output, a blind user can perceive the positions of surrounding objects through sound signals and locate them; the method thus serves as an effective navigation aid and provides the user with a medium for perceiving the environment.
The vision parameter of an object can also be converted into a second audio parameter and output, so the blind user can know not only the position or movement track of an object but also information such as its shape and color, and can determine exactly what the object is. The surrounding environment is thus presented to the blind user more intuitively: just as a sighted person sees the surrounding world, the blind user "listens" to it, which is more powerful than an ordinary guide function.
Those skilled in the art should understand that embodiments of the invention can be provided as a method, a system or a computer program product. Accordingly, the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to include them as well.

Claims (17)

1. An information output method, characterized by comprising the steps of:
obtaining environmental information of a first environment, the first environment containing at least one object, each of the at least one object having a corresponding location parameter;
converting the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter; and
outputting first sound information according to the first audio parameter.
2. The method of claim 1, characterized in that each of the at least one object in the first environment has a corresponding vision parameter;
after the step of converting the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter, the method further comprises the step of: converting the vision parameter corresponding to each of the at least one object into a second audio parameter; and
the step of outputting first sound information according to the first audio parameter comprises: outputting the first sound information and second sound information according to the first audio parameter and the second audio parameter.
3. The method of claim 2, characterized in that the step of converting the vision parameter corresponding to each of the at least one object into a second audio parameter comprises:
obtaining the vision parameter corresponding to each of the at least one object, the vision parameter including at least a visual color and/or a visual shape; and
converting each obtained vision parameter into a second audio parameter.
4. The method of claim 3, characterized in that the step of converting each obtained vision parameter into a second audio parameter comprises:
obtaining a correspondence set between the vision parameter and the second audio parameter, the correspondence set containing at least one correspondence between a vision parameter and a second audio parameter; and
determining the second audio parameter according to the obtained vision parameter and the correspondences contained in the correspondence set.
5. The method of claim 1, characterized in that the first audio parameter includes at least a bearing, and the second audio parameter includes at least one or more of frequency, volume and timbre.
6. The method of claim 1, characterized by further comprising, before the step of converting the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter, the steps of:
monitoring the position of the at least one object to obtain a monitoring result;
judging whether the monitoring result shows that the position of the at least one object has changed; and
when the judgment result is yes, obtaining the updated location parameter of each of the at least one object;
wherein the step of converting the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter comprises: converting the updated location parameter corresponding to each of the at least one object in the first environment into a first audio parameter.
7. The method of claim 1, characterized in that the step of obtaining environmental information of a first environment comprises:
collecting image information of the first environment; and
processing the image information to obtain the environmental information of the first environment.
8. The method of claim 1, characterized in that the step of obtaining environmental information of a first environment comprises:
probing the first environment; and
obtaining the environmental information of the first environment through the probing.
9. An information output apparatus, characterized by comprising:
an acquisition module, configured to obtain environmental information of a first environment, the first environment containing at least one object, each of the at least one object having a corresponding location parameter;
a processing module, configured to convert the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter; and
an output module, configured to output first sound information according to the first audio parameter.
10. The apparatus of claim 9, characterized in that each of the at least one object in the first environment has a corresponding vision parameter; the processing module is further configured to convert the vision parameter corresponding to each of the at least one object into a second audio parameter; and the output module is further configured to output the first sound information and second sound information according to the first audio parameter and the second audio parameter.
11. The apparatus of claim 10, characterized in that the acquisition module is further configured to obtain the vision parameter corresponding to each of the at least one object, the vision parameter including at least a visual color and/or a visual shape; and the processing module is further configured to convert each obtained vision parameter into a second audio parameter.
12. The apparatus of claim 11, characterized in that the processing module further comprises:
a first acquiring unit, configured to obtain a correspondence set between the vision parameter and the second audio parameter, the correspondence set containing at least one correspondence between a vision parameter and a second audio parameter; and
a determining unit, configured to determine the second audio parameter according to the obtained vision parameter and the correspondences contained in the correspondence set.
13. The apparatus of claim 9, characterized in that the first audio parameter includes at least a bearing, and the second audio parameter includes at least one or more of frequency, volume and timbre.
14. The apparatus of claim 9, characterized in that the acquisition module further comprises:
a monitoring unit, configured to monitor the position of the at least one object to obtain a monitoring result;
a judging unit, configured to judge whether the monitoring result shows that the position of the at least one object has changed; and
a second acquiring unit, configured to obtain, when the judgment result is yes, the updated location parameter of each of the at least one object;
wherein the processing module is further configured to convert the updated location parameter corresponding to each of the at least one object in the first environment into a first audio parameter.
15. The apparatus of claim 9, characterized in that the acquisition module further comprises:
a collecting unit, configured to collect image information of the first environment; and
a second acquiring unit, configured to process the image information to obtain the environmental information of the first environment.
16. The apparatus of claim 9, characterized in that the acquisition module further comprises:
a monitoring unit, configured to probe the first environment; and
a second acquiring unit, configured to obtain the environmental information of the first environment through the probing.
17. An electronic device, characterized by comprising:
a fixing unit, used to fix the device on a user's body;
a collecting unit, facing forward once the user wears the device, configured to collect and process environmental information of a first environment;
a processing unit, configured to convert the location parameter corresponding to each of at least one object in the first environment into a first audio parameter; and
a sound output unit, located near the user's ears once the user wears the device, configured to output information.
CN201210046679.9A 2012-02-24 2012-02-24 A kind of information output method, device and electronic equipment Active CN103294880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210046679.9A CN103294880B (en) 2012-02-24 2012-02-24 A kind of information output method, device and electronic equipment


Publications (2)

Publication Number Publication Date
CN103294880A true CN103294880A (en) 2013-09-11
CN103294880B CN103294880B (en) 2016-12-14

Family

ID=49095737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210046679.9A Active CN103294880B (en) 2012-02-24 2012-02-24 A kind of information output method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN103294880B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301773A (en) * 2017-06-16 2017-10-27 上海肇观电子科技有限公司 A kind of method and device to destination object prompt message

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2248956Y (en) * 1995-12-21 1997-03-05 山东师范大学 Hand holding type multifunction device for guiding blind person
CN2618235Y (en) * 2003-04-25 2004-05-26 赵舜培 Books with self-identifying content
CN101385677A (en) * 2008-10-16 2009-03-18 上海交通大学 Blind guiding method and device based on moving body track
CN201404415Y (en) * 2009-05-07 2010-02-17 蒋清晓 Ultrasonic short-range multi-directional environment reconstruction system


Also Published As

Publication number Publication date
CN103294880B (en) 2016-12-14

Similar Documents

Publication Publication Date Title
CN105103457B (en) Portable terminal, audiphone and in portable terminal indicate sound source position method
CN108156561B (en) Audio signal processing method and device and terminal
US11482237B2 (en) Method and terminal for reconstructing speech signal, and computer storage medium
CN109445572A (en) The method, graphical user interface and terminal of wicket are quickly recalled in full screen display video
CN106445219A (en) Mobile terminal and method for controlling the same
CN108881568B (en) Method and device for sounding display screen, electronic device and storage medium
CN105874408A (en) Gesture interactive wearable spatial audio system
US9760998B2 (en) Video processing method and apparatus
WO2017092396A1 (en) Virtual reality interaction system and method
CN111246300A (en) Method, device and equipment for generating clip template and storage medium
CN110708630B (en) Method, device and equipment for controlling earphone and storage medium
CN114189790B (en) Audio information processing method, electronic device, system, product and medium
CN111370025A (en) Audio recognition method and device and computer storage medium
CN110493635B (en) Video playing method and device and terminal
CN117334207A (en) Sound processing method and electronic equipment
CN109039355A (en) Phonetic prompt method and Related product
KR20160070529A (en) Wearable device
CN103294880A (en) Information output method and device as well as electronic device
KR20150029197A (en) Mobile terminal and operation method thereof
CN112927718B (en) Method, device, terminal and storage medium for sensing surrounding environment
CN113997863B (en) Data processing method and device and vehicle
CN117311490A (en) Wrist-worn device control method, related system and storage medium
KR101727900B1 (en) Mobile terminal and operation control method thereof
CN114302278A (en) Headset wearing calibration method, electronic device and computer-readable storage medium
CN114238859A (en) Data processing system, method, electronic device, and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant