CN103294880B - Information output method, apparatus, and electronic device - Google Patents

Information output method, apparatus, and electronic device

Info

Publication number
CN103294880B
CN103294880B
Authority
CN
China
Prior art keywords
parameter
environment
vision
audio
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210046679.9A
Other languages
Chinese (zh)
Other versions
CN103294880A (en)
Inventor
张浦
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo (Beijing) Ltd
Priority to CN201210046679.9A
Publication of CN103294880A
Application granted
Publication of CN103294880B
Legal status: Active
Anticipated expiration

Landscapes

  • Stereophonic System (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses an information output method for providing a user with an environment-awareness medium. The method includes: obtaining environment information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding location parameter; converting the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter; and outputting first sound information according to the first audio parameter. The invention also discloses an apparatus and an electronic device for implementing the method.

Description

Information output method, apparatus, and electronic device
Technical field
The present invention relates to signal processing technology, and in particular to an information output method, an information output apparatus, and an electronic device.
Background technology
In daily life, a blind person walking alone needs to rely on a guide device to assist with walking.
Various electronic guide devices have appeared on the market, typically including mobile phones or other hand-held guide devices with a guiding function.
For example, a guide device may prestore different guide markers and prompt the blind user by voice when a prestored marker is found in a captured image, or the guide device may obtain maps and signals through sensors and prompt the blind user by voice.
In the course of implementing the technical solutions of the embodiments of the present application, the inventor found at least the following technical problems in the prior art:
The prior art needs to prestore different guide markers and to label the objects around the environment, which is relatively cumbersome; moreover, it only provides the user with a text-to-speech description of an object's name and position, and cannot give the user an intuitive, vision-like understanding.
Summary of the invention
Embodiments of the present invention provide an information output method, an apparatus, and an electronic device, for providing an environment-awareness medium for a user.
An information output method comprises the following steps:
obtaining environment information of a first environment, where the first environment includes at least one object, and each of the at least one object has a corresponding location parameter;
converting the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter;
outputting first sound information according to the first audio parameter.
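The three steps above can be sketched as a minimal pipeline. This is an illustration under assumed conventions (a flat 2-D scene, a stereo azimuth as the first audio parameter); all names here (`Obj`, `to_audio_param`, `output_sound`) are hypothetical and do not come from the patent.

```python
# Minimal sketch of the claimed three-step method, under assumed conventions.
import math
from dataclasses import dataclass

@dataclass
class Obj:
    x: float  # metres; positive = to the user's right
    y: float  # metres; positive = ahead of the user

def to_audio_param(obj: Obj) -> dict:
    """Step 2: convert an object's location parameter into a first audio
    parameter; here simply an azimuth angle (0 deg = straight ahead)."""
    return {"azimuth_deg": math.degrees(math.atan2(obj.x, obj.y))}

def output_sound(params: list) -> list:
    """Step 3: 'output' the first sound information (a stub that returns
    the parameters instead of driving real audio hardware)."""
    return params

# Step 1 would come from a camera, sensor, or radar; hard-coded here.
environment = [Obj(x=1.0, y=1.0), Obj(x=-2.0, y=0.0)]
sounds = output_sound([to_audio_param(o) for o in environment])
print(sounds)  # first object is right-front (about +45 deg), second hard left
```

A real implementation would replace the stub with spatialized audio playback; the sketch only shows how a per-object location parameter becomes a per-object audio parameter.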
Preferably, each of the at least one object in the first environment further has a corresponding vision parameter;
after converting the location parameters corresponding to the at least one object in the first environment into first audio parameters, the method further comprises: converting the vision parameters corresponding to the at least one object into second audio parameters;
the step of outputting the first sound information according to the first audio parameter then includes: outputting the first sound information and second sound information according to the first audio parameter and the second audio parameter.
Preferably, the step of converting the vision parameters corresponding to the at least one object into second audio parameters includes:
obtaining the vision parameter corresponding to each of the at least one object, where the vision parameter includes at least a visual color and/or a visual shape;
converting each obtained vision parameter into a second audio parameter.
Preferably, the step of converting each obtained vision parameter into a second audio parameter includes:
obtaining a correspondence set of vision parameters and second audio parameters, where the correspondence set includes at least one correspondence between a vision parameter and a second audio parameter;
determining the second audio parameter according to the obtained vision parameter and the correspondences included in the correspondence set.
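The correspondence set can be modeled as a simple lookup table mapping vision parameters to second audio parameters. The concrete colour/shape-to-sound values below are illustrative assumptions, not values specified by the patent.

```python
# A hedged sketch of the "correspondence set": a table mapping vision
# parameters (colour, shape) to second audio parameters (frequency, volume).
# The concrete numbers are assumptions chosen for illustration.
CORRESPONDENCE_SET = {
    ("color", "red"):      {"frequency_hz": 880},
    ("color", "black"):    {"frequency_hz": 220},
    ("shape", "cuboid"):   {"volume_db": 60},
    ("shape", "cylinder"): {"volume_db": 70},
}

def to_second_audio_param(vision_params: dict) -> dict:
    """Resolve each obtained vision parameter through the correspondence
    set and merge the results into one second audio parameter."""
    result = {}
    for kind, value in vision_params.items():
        mapped = CORRESPONDENCE_SET.get((kind, value))
        if mapped:
            result.update(mapped)
    return result

print(to_second_audio_param({"color": "red", "shape": "cylinder"}))
# {'frequency_hz': 880, 'volume_db': 70}
```

Unmapped vision parameters are simply skipped here; a fuller implementation might fall back to a preset or random timbre, as the description below suggests.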
Preferably, the first audio parameter includes at least an azimuth, and the second audio parameter includes at least one or more of frequency, volume, and timbre.
Preferably, before converting the location parameters corresponding to the at least one object in the first environment into first audio parameters, the method further comprises:
monitoring the position of the at least one object to obtain a monitoring result;
judging whether the monitoring result indicates that the position of the at least one object has changed;
when the judgment result is yes, obtaining the updated location parameter of each of the at least one object;
the step of converting the location parameters corresponding to the at least one object in the first environment into first audio parameters then includes: converting the updated location parameters corresponding to the at least one object in the first environment into first audio parameters.
Preferably, the step of obtaining the environment information of the first environment includes:
collecting image information of the first environment;
processing the image information to obtain the environment information of the first environment.
Preferably, the step of obtaining the environment information of the first environment includes:
detecting the first environment;
obtaining the environment information of the first environment through the detection.
An information output apparatus includes:
an acquisition module, configured to obtain environment information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding location parameter;
a processing module, configured to convert the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter;
an output module, configured to output first sound information according to the first audio parameter.
Preferably, each of the at least one object in the first environment further has a corresponding vision parameter; the processing module is further configured to convert the vision parameters corresponding to the at least one object into second audio parameters; and the output module is further configured to output the first sound information and second sound information according to the first audio parameter and the second audio parameter.
Preferably, the acquisition module is further configured to obtain the vision parameter corresponding to each of the at least one object, the vision parameter including at least a visual color and/or a visual shape; and the processing module is further configured to convert each obtained vision parameter into a second audio parameter.
Preferably, the processing module further includes:
a first acquiring unit, configured to obtain a correspondence set of vision parameters and second audio parameters, the correspondence set including at least one correspondence between a vision parameter and a second audio parameter;
a determining unit, configured to determine the second audio parameter according to the obtained vision parameter and the correspondences included in the correspondence set.
Preferably, the first audio parameter includes at least an azimuth, and the second audio parameter includes at least one or more of frequency, volume, and timbre.
Preferably, the acquisition module further includes:
a monitoring unit, configured to monitor the position of the at least one object to obtain a monitoring result;
a judging unit, configured to judge whether the monitoring result indicates that the position of the at least one object has changed;
a second acquiring unit, configured to obtain, when the judgment result is yes, the updated location parameter of each of the at least one object;
the processing module is further configured to convert the updated location parameters corresponding to the at least one object in the first environment into first audio parameters.
Preferably, the acquisition module further includes:
a collecting unit, configured to collect image information of the first environment;
a second acquiring unit, configured to process the image information to obtain the environment information of the first environment.
Preferably, the acquisition module further includes:
a monitoring unit, configured to detect the first environment;
a second acquiring unit, configured to obtain the environment information of the first environment through the detection.
An electronic device includes:
a fixing unit, configured to fix the device to the user;
a collecting unit, positioned to face forward when the device is worn by the user, configured to collect and process environment information of a first environment;
a processing unit, configured to convert the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter;
a sound output unit, positioned near the user's ear when the device is worn, configured to output information.
The information output method in the embodiments of the present invention obtains environment information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding location parameter; converts the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter; and outputs first sound information according to the first audio parameter. By converting the location parameters of objects in the environment into first audio parameters before output, a blind user can perceive the positions of surrounding objects through sound signals and thereby locate them, so the method serves well as a navigation aid for the blind and provides an environment-awareness medium for the user.
Brief description of the drawings
Fig. 1 is a general flowchart of the information output method in an embodiment of the present invention;
Fig. 2 is a structural diagram of one information output apparatus in an embodiment of the present invention;
Fig. 3 is a structural diagram of another information output apparatus in an embodiment of the present invention.
Detailed description of the invention
The information output method in the embodiments of the present invention obtains environment information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding location parameter; converts the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter; and outputs first sound information according to the first audio parameter. By converting the location parameters of objects in the environment into first audio parameters before output, a blind user can perceive the positions of surrounding objects through sound signals and thereby locate them, so the method serves well as a navigation aid for the blind and provides an environment-awareness medium for the user.
Referring to Fig. 1, the main flow of the information output method in an embodiment of the present invention is as follows:
Step 101: obtain environment information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding location parameter.
In an embodiment of the present invention, the environment information of the first environment can be obtained through devices such as a camera, a video camera, a sensor, or a radar.
For example, a camera or video camera can obtain the video information, image information, motion track information, and the like of objects in the surrounding environment; at least the position information of an object can be determined from the video or image information, and a change in an object's position can be determined from the motion track information. A sensor can obtain the image information, position information, velocity information, motion track information, and the like of objects in the surrounding environment, and a radar can obtain the position information, velocity information, motion track information, and the like of objects in the surrounding environment. Specifically, the image information of the first environment can be collected by the camera or video camera and then processed to obtain the environment information of the first environment. Alternatively, the first environment can be detected by the sensor or radar to obtain the parameter information of the at least one object included in the first environment; the parameter information of the at least one object constitutes the environment information of the first environment, so the sensor or radar obtains the environment information of the first environment through detection. Here, the first environment includes at least one object, and the parameter information includes at least a location parameter and a vision parameter.
After the location parameter of the at least one object in the first environment is obtained, the position of the at least one object can also be monitored to obtain a monitoring result, and it is judged whether the monitoring result indicates that the position of the at least one object has changed. When the judgment result is yes, the updated location parameter of each of the at least one object is obtained; the updated location parameter can represent the new position of the at least one object, or can represent the motion track of the at least one object.
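The monitoring step above can be sketched as a comparison between the last known and the newly observed positions, so that only objects whose position has changed yield updated location parameters. The object identifiers and coordinates are hypothetical.

```python
# Sketch of the monitoring/judging step: compare each object's last known
# location with the newly observed one; only changed objects are updated.
def monitor(previous: dict, observed: dict) -> dict:
    """Return updated location parameters (object id -> new (x, y)) for
    objects whose position changed since the last observation."""
    return {oid: pos for oid, pos in observed.items()
            if previous.get(oid) != pos}

prev = {"car": (1.0, 1.0), "box": (-2.0, 0.0)}
now  = {"car": (1.0, 2.0), "box": (-2.0, 0.0)}
print(monitor(prev, now))  # {'car': (1.0, 2.0)} -- only the car moved
```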
At least the location parameter of each object included in the first environment can be determined from the obtained environment information; the location parameter can be determined from the position information. If the obtained environment information includes visual information (for example, video or images), the vision parameter of a specific object can also be determined from the obtained visual information, where the vision parameter includes at least a visual color or a visual shape. The attribute information of the specific object can also be determined from the obtained visual information, i.e., what kind of object the specific object actually is.
Step 102: convert the location parameter corresponding to each of the at least one object in the first environment into a first audio parameter.
If the position of an object has changed so that its location parameter has been updated, this step converts the object's updated location parameter into the first audio parameter. The first audio parameter includes at least an azimuth, i.e., the direction of the sound.
For example, suppose the first environment includes one object, an automobile, located to the right-front of the information output apparatus, and monitoring determines that its position does not change. The location parameter corresponding to the automobile can then be converted into a first audio parameter whose sound azimuth is right-front. If the first sound information is finally output through earphones, it can be output at the forward position of the right earphone, to indicate that the automobile is to the right-front. The location parameter in the embodiments of the present invention concerns only the specific position or motion track of an object; it does not concern what the object actually is, nor the object's shape, color, or similar information. The timbre of the first sound information can be preset or randomly selected.
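Rendering the first sound information over earphones, as in the automobile example, can be sketched with a constant-power pan law that turns the azimuth into left/right channel gains. The pan law itself is an assumption for illustration; the patent does not specify how the azimuth is rendered.

```python
# Sketch: map the azimuth in a first audio parameter to earphone channel
# gains using a constant-power pan law (an assumed rendering method).
import math

def pan_gains(azimuth_deg: float) -> tuple:
    """Map azimuth (-90 = hard left, +90 = hard right) to (L, R) gains."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # 0 .. pi/2
    return (math.cos(theta), math.sin(theta))

left, right = pan_gains(45.0)  # automobile at right-front
print(right > left)            # True: the sound is louder in the right ear
```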
If N objects in the first environment also have vision parameters, the vision parameters corresponding to the at least one object can also be obtained and converted into second audio parameters, where the second audio parameter includes at least one or more of frequency (i.e., sound pitch), volume (i.e., sound loudness), and timbre, and N is an integer not less than 0. When converting the vision parameters corresponding to the at least one object into second audio parameters, a correspondence set of vision parameters and second audio parameters can be obtained; the correspondence set includes at least one correspondence between a vision parameter and a second audio parameter. For example, the correspondence set can include a correspondence between color and frequency, a correspondence between shape and volume, a correspondence between object attribute and timbre, and so on. The second audio parameter can then be determined from the obtained vision parameter and the correspondences included in the correspondence set. For example, if the obtained vision parameter includes a color, the frequency in the second audio parameter can be determined from the color-to-frequency correspondence in the correspondence set.
In the embodiments of the present invention, the shape of an object can be represented by sound frequency and its color by sound volume: for example, the frequency at a given point is proportional to the object's height at that point, and the volume at a given point is proportional to the color depth at that point, so a deeper (darker) color yields a louder sound. Alternatively, the frequency at a point can be proportional to the color depth at that point (the deeper the color, the higher the frequency), and the volume at a point proportional to the object's height at that point. The embodiments of the present invention are described below taking frequency-for-shape and volume-for-color as an example.
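The frequency-for-shape, volume-for-color mapping described above can be sketched per sampled point. The base values and scale factors below are assumptions; the patent only states the proportional relationships.

```python
# Sketch of the example mapping: sound frequency proportional to a point's
# height on the object, volume proportional to the colour depth there.
# Base values and scale factors are illustrative assumptions.
def point_to_sound(height_m: float, darkness: float) -> dict:
    """darkness in [0, 1]: 0 = white, 1 = deepest (black) colour."""
    return {
        "frequency_hz": 200.0 + 400.0 * height_m,  # taller point -> higher pitch
        "volume_db": 40.0 + 30.0 * darkness,       # deeper colour -> louder
    }

print(point_to_sound(height_m=1.5, darkness=1.0))
# {'frequency_hz': 800.0, 'volume_db': 70.0}
```

Swapping the two proportionalities, as the alternative in the text suggests, would just exchange which visual quantity drives frequency and which drives volume.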
For example, the automobile to the right-front of the information output apparatus is roughly a cuboid; its shape can be characterized by different sound frequencies, and its black color by the sound volume. On output, the first sound information can be output first so the user learns the object's position, followed by the second sound information so the user learns the object's shape, color, and similar details; alternatively, the first and second sound information can be output simultaneously, or only the first sound information can be output.
If the attribute information of the specific object is also determined from the vision parameter, that attribute can be characterized by timbre. Preferably, if the object itself can make a sound, that sound can be used to characterize the object, which is more vivid. For example, if the specific object is an automobile, the timbre in the second audio parameter can be an automobile horn, i.e., the timbre of the second sound information can be a honk. If the attribute of the specific object is not determined from the vision parameter, or the object itself cannot make a sound, the timbre of the second sound information can be preset or randomly selected; for example, it can be preset that the sound of a stream characterizes an automobile and a piano sound characterizes a building, and so on, or the timbre can be chosen randomly. However, to help the user form habits over long-term use, presetting a timbre for each kind of object is the more user-friendly choice and makes memorization easier. Even if a specific object can make a sound, the second sound information corresponding to it can still use a preset or randomly selected timbre; when presetting a timbre, it is not necessary to set the timbre of the second sound information to the object's own sound.
Step 103: output the first sound information according to the first audio parameter.
The timbre of the first sound information can be preset or randomly selected. For example, suppose a bicycle passes by the user, with a motion track running from the user's left-rear to left-front. This motion track information can be collected and converted into a first audio parameter, and the first sound information is output according to that parameter, for example through earphones: on output, the sound can move from the rear of the left earphone to its front, to indicate that an object is moving from the user's left-rear to left-front. The first sound information represents the position or motion track of an object.
Second sound information can also be output according to the second audio parameter; its timbre can likewise be preset or randomly selected. For example, suppose there is a red cylindrical carton on the user's right side, so the first audio parameter corresponding to the carton (sound azimuth: right) and the second audio parameter (shape: cylinder; color: red) are determined. The first sound information can be output first according to the first audio parameter, and then the second sound information according to the second audio parameter; outputting them in sequence prevents the sounds from blurring together. Alternatively, the first and second sound information can be output simultaneously. The second sound information represents the shape and/or color of an object, and can also represent the object's attribute, i.e., what kind of object it is.
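Turning a motion track, such as the passing bicycle above, into a moving sound can be sketched by converting each sampled position into one first-audio-parameter frame, so playback sweeps across the stereo field. The sampled coordinates are hypothetical.

```python
# Sketch: each sampled (x, y) position on a motion track becomes one
# azimuth frame, so sequential playback sweeps the sound across space.
import math

def track_to_frames(track: list) -> list:
    """track: list of (x, y) positions; returns one azimuth per frame
    (negative = left of the user, 0 = straight ahead)."""
    return [math.degrees(math.atan2(x, y)) for x, y in track]

bicycle = [(-1.0, -1.0), (-1.0, 0.0), (-1.0, 1.0)]  # left-rear -> left-front
frames = track_to_frames(bicycle)
print(frames)  # sweeps from about -135 deg (left-rear) to -45 deg (left-front)
```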
The information output method in the embodiments of the present invention is described below through several specific embodiments.
Embodiment one:
The first environment is detected by a sensor to obtain the parameter information of at least one object in the first environment. In this embodiment of the present invention, the first environment contains one object whose parameter information is a location parameter indicating that the object is to the right-rear. After this location parameter is obtained, the first environment can continue to be monitored; in this embodiment the object moves, so the monitoring result shows that the object's position has changed, and the updated location parameter of the object can be obtained. The updated location parameter can represent the object's position after the update, which is right-front, or can represent the object's motion track, which runs from right-rear to right-front. In this embodiment the object moves quickly, and the sensor captures only its motion track and no other information about it, so the object's vision parameter cannot be obtained. The updated location parameter is converted into a first audio parameter, and the first sound information is output according to that parameter. In this embodiment the output is through earphones: if the updated location parameter represents the object's updated position, the first sound information can be output at the forward position of the right earphone; if it represents the object's updated motion track information, the first sound information can be output starting from the rear of the right earphone and gradually moving to the front, to indicate that the object's motion track runs from right-rear to right-front.
Embodiment two:
The image information of the first environment is collected by a camera. In this embodiment of the present invention the first environment includes one object, so the camera collects image information about that object. The image information is processed to obtain the environment information of the first environment. The obtained parameter information of the object includes a location parameter indicating that the object is to the left-rear. After this location parameter is obtained, the first environment can continue to be monitored; in this embodiment the monitoring result shows that the object is stationary and its position does not change, so the location parameter corresponding to the object can be converted directly into the first audio parameter. Because the image information of the first environment was collected by a camera, the obtained parameter information of the object can also include a vision parameter. A correspondence set of vision parameters and second audio parameters is obtained; the correspondence set includes at least one correspondence between a vision parameter and a second audio parameter, and in this embodiment it includes a correspondence between shape and volume and a correspondence between color and frequency. The second audio parameter is determined from the obtained vision parameter and the correspondences included in the correspondence set. In this embodiment the attribute of the object is not determined from the collected image information, i.e., it does not matter what kind of object it is. The output is through earphones, so the first sound information and the second sound information can be output at the rearward position of the left earphone, either simultaneously or in sequence. The timbre of the second sound information can be preset or randomly selected.
Embodiment three:
The image information of the first environment is collected by a camera. In this embodiment of the present invention the first environment includes one object, so the camera collects image information about this object. The image information is processed to obtain the environmental information of the first environment. The parameter information of the object thus obtained includes a position parameter, which indicates that the object is located to the left front. After this position parameter is obtained, the first environment continues to be monitored; in this embodiment the monitoring result shows that the object remains static and its position does not change, so the position parameter corresponding to the object can be converted directly into the first audio parameter. Because the image information of the first environment is collected by a camera, the parameter information of the object can also include a vision parameter. A correspondence set of the vision parameter and the second audio parameter is obtained, the correspondence set including the correspondence between at least one vision parameter and the second audio parameter; in this embodiment the correspondence set includes a correspondence between shape and volume and a correspondence between color and audio frequency. The second audio parameter is determined according to the obtained vision parameter and the correspondences included in the correspondence set. In this embodiment the attribute of the object is determined from the collected image information, i.e. the object is a motorcycle. Output in this embodiment is through earphones, so the first acoustic information and the second acoustic information can be output at a rearward position of the left-side earphone, either simultaneously or in a time-shared manner. The timbre of the second acoustic information can be a preset timbre; for example, the preset timbre can be the horn sound of a motorcycle, so that the listener can determine directly from the sound what kind of object it is. Alternatively, the timbre can be randomly selected.
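The correspondence-set lookup described in this embodiment can be sketched as a pair of mappings. The concrete colors, shapes, frequencies and volumes below are illustrative assumptions, not values taken from the patent:

```python
# Assumed correspondence set: color -> audio frequency (Hz), shape -> volume (0..1).
COLOR_TO_FREQUENCY = {"red": 880.0, "green": 440.0, "blue": 220.0}
SHAPE_TO_VOLUME = {"large": 1.0, "medium": 0.6, "small": 0.3}

def second_audio_parameter(vision_parameter):
    """Map a vision parameter (visual color, visual shape) to a second
    audio parameter (frequency and volume) via the correspondence set."""
    color, shape = vision_parameter
    return {
        "frequency": COLOR_TO_FREQUENCY.get(color, 440.0),  # fallback mid tone
        "volume": SHAPE_TO_VOLUME.get(shape, 0.5),          # fallback mid volume
    }

print(second_audio_parameter(("red", "large")))
```

A timbre entry (e.g. a motorcycle horn sample keyed by the recognized object attribute) could be added to the returned dictionary in the same way.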
Referring to Fig. 2, an embodiment of the present invention provides an information output apparatus, which can include an acquisition module 201, a processing module 202 and an output module 203.
The acquisition module 201 is configured to obtain the environmental information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding position parameter. The acquisition module 201 can also be configured to obtain the vision parameter corresponding to the at least one object; the vision parameter at least includes a visual color and/or a visual shape.
The acquisition module 201 can also include a monitoring unit, a judging unit, a second acquiring unit and a collecting unit.
The monitoring unit may be used to monitor the position of the at least one object to obtain a monitoring result. The monitoring unit can also be used to detect the first environment.
The judging unit may be used to judge whether the monitoring result indicates that the position of the at least one object has changed.
The second acquiring unit may be used to obtain, when the judging result is yes, the updated position parameter of each of the at least one object. The second acquiring unit can also be used to process the image information to obtain the environmental information of the first environment, or to obtain the environmental information of the first environment through the detection, i.e. the detection performed by the monitoring unit.
The collecting unit may be used to collect the image information of the first environment.
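The cooperation of the monitoring unit, the judging unit and the second acquiring unit can be sketched as a single comparison step. The position representation (an object id mapped to an (x, y) tuple) is an illustrative assumption:

```python
def updated_positions(previous, current):
    """Judging unit plus second acquiring unit in one step: compare the
    monitored positions against the previous ones and return the updated
    position parameter of every object whose position has changed."""
    return {
        obj_id: pos
        for obj_id, pos in current.items()
        if previous.get(obj_id) != pos
    }

previous = {"obj1": (-1.0, 2.0)}                  # object at the left front
# Static object: monitoring result indicates no change, nothing to update.
assert updated_positions(previous, {"obj1": (-1.0, 2.0)}) == {}
# Moved object: only its updated position parameter is passed on for conversion.
assert updated_positions(previous, {"obj1": (0.5, 2.0)}) == {"obj1": (0.5, 2.0)}
```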
The processing module 202 is configured to convert the position parameter corresponding to the at least one object in the first environment into a first audio parameter. The processing module 202 can also be configured to convert the vision parameter corresponding to the at least one object into a second audio parameter, i.e. to convert the vision parameter obtained by the acquisition module 201 into the second audio parameter. In this embodiment the first audio parameter at least includes an azimuth, and the second audio parameter at least includes one or more of frequency, volume and timbre. The processing module 202 can also be configured to convert the updated position parameter corresponding to the at least one object in the first environment into the first audio parameter.
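The position-to-azimuth conversion performed by the processing module can be sketched as follows. The coordinate convention (x positive to the right, y positive ahead, in metres) and the equal-weight left/right gains for earphone output are illustrative assumptions:

```python
import math

def first_audio_parameter(position):
    """Convert an (x, y) position parameter into a first audio parameter:
    an azimuth in degrees plus simple left/right channel gains suitable
    for earphone output."""
    x, y = position
    azimuth = math.degrees(math.atan2(x, y))      # 0 = straight ahead
    pan = max(-1.0, min(1.0, azimuth / 90.0))     # clamp to [-1, 1]
    return {
        "azimuth_deg": azimuth,
        "left_gain": (1.0 - pan) / 2.0,
        "right_gain": (1.0 + pan) / 2.0,
    }

p = first_audio_parameter((-1.0, 1.0))  # object to the left front: louder on the left
```

A production implementation would instead use head-related transfer functions for full 3D placement; this linear pan only illustrates how a position parameter becomes an audible direction.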
The processing module 202 can also include a first acquiring unit and a determining unit.
The first acquiring unit may be used to obtain a correspondence set of the vision parameter and the second audio parameter, the correspondence set including the correspondence between at least one vision parameter and the second audio parameter;
The determining unit may be used to determine the second audio parameter according to the obtained vision parameter and the correspondences included in the correspondence set.
The output module 203 is configured to output a first acoustic information according to the first audio parameter. The output module 203 can also be used to output the first acoustic information and a second acoustic information according to the first audio parameter and the second audio parameter.
Referring to Fig. 3, an embodiment of the present invention also provides an information output apparatus, which includes a fixing unit 301, a collecting unit 302, a sound output unit 303 and a processing unit 304. The apparatus can be a physical entity device, for example a wearable device such as a helmet-type or earphone-type device.
The fixing unit 301 is used to fix the apparatus to the user. For example, the whole housing of a helmet serves as the fixing unit 301, as does the in-ear earplug of a non-headband receiver; the fixing unit allows the information output apparatus to be fixed to the user and be convenient to use.
The collecting unit 302 is located toward the front after the apparatus is worn by the user, and is used to collect and process the environmental information of the first environment. In this embodiment the collecting unit 302 can be a device such as a camera, a sensor or a radar. For example, the collecting unit 302 can collect the image information of the first environment and process the image information to obtain the environmental information of the first environment. Alternatively, the collecting unit 302 can detect the first environment and obtain the environmental information of the first environment through the detection.
The sound output unit 303 is located near the user's ear after the apparatus is worn, and is used to output information. In this embodiment the sound output unit 303 can be a device such as a loudspeaker. The sound output unit 303 can output the first acoustic information and the second acoustic information to the user.
The processing unit 304 is used to convert the position parameter corresponding to the at least one object in the first environment into the first audio parameter, and can also be used to convert the vision parameter corresponding to the at least one object in the first environment into the second audio parameter.
The apparatus can also include a storage unit 305, which may be used to store the correspondence set of vision parameters and audio parameters, and may also store the preset timbre of a corresponding object.
The apparatus converts the information collected from the region the user is facing into acoustic information and outputs it to the user through the sound output unit 303. Just as an ordinary person watches the surrounding world with the eyes, the apparatus allows the user to "listen" to the surrounding world with the ears.
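One pass of this collect-process-output pipeline can be sketched end to end. The detection format, coordinate convention and color-to-frequency mapping are illustrative assumptions; real detections would come from the collecting unit's camera, sensor or radar:

```python
import math

def perceive(objects):
    """Convert one frame of detected objects into audio events: the
    position parameter becomes an azimuth (first audio parameter) and the
    color becomes a frequency (second audio parameter, assumed mapping)."""
    events = []
    for obj in objects:
        x, y = obj["position"]                      # x right, y ahead (assumed)
        azimuth = math.degrees(math.atan2(x, y))
        freq = {"red": 880.0, "blue": 220.0}.get(obj.get("color"), 440.0)
        events.append({"azimuth_deg": round(azimuth, 1), "frequency_hz": freq})
    return events  # handed to the sound output unit for synthesis

scene = [{"position": (-1.0, 1.0), "color": "red"}]
print(perceive(scene))
```

Running this loop continuously, frame by frame, is what lets a moving object trace an audible trajectory for the listener.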
The information output apparatus in this embodiment can correspond to the information output apparatus shown in Fig. 2. For example, the collecting unit 302 can correspond to the acquisition module 201 in Fig. 2, i.e. the collecting unit 302 can perform the same or corresponding functions as the acquisition module 201; the sound output unit 303 can correspond to the output module 203 in Fig. 2, i.e. the sound output unit 303 can perform the same or corresponding functions as the output module 203; and the processing unit 304, or the processing unit 304 together with the storage unit 305, can correspond to the processing module 202 in Fig. 2, i.e. the processing unit 304, or the processing unit 304 together with the storage unit 305, can perform the same or corresponding functions as the processing module 202. In other words, the information output apparatus in this embodiment can be regarded as a physical embodiment of the information output apparatus shown in Fig. 2.
The information output method in the embodiments of the present invention obtains the environmental information of a first environment, where the first environment includes at least one object and each of the at least one object has a corresponding position parameter; converts the position parameter corresponding to the at least one object in the first environment into a first audio parameter; and outputs a first acoustic information according to the first audio parameter. Because the position parameter corresponding to an object in the environment is converted into the first audio parameter before output, a blind person can perceive the position of surrounding objects through the acoustic signal and locate the objects, so the method serves a navigation function for the blind and provides an environment-perception medium for the user.
The vision parameter of an object can also be converted into a second audio parameter for output, so that the blind person can not only know the position or movement track of the object, but also know information such as its shape and color, and can determine exactly what the object is; the surrounding environment is thus presented to the blind person more intuitively. Just as an ordinary person can see the surrounding world with the eyes, the blind person can "listen" to the surrounding world, which is more powerful than common guide functions for the blind.
Those skilled in the art should appreciate that the embodiments of the present invention can be provided as a method, a system or a computer program product. Therefore, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (15)

1. An information output method, characterized by comprising the following steps:
obtaining the environmental information of a first environment, wherein the first environment includes at least one object, and each of the at least one object has a corresponding position parameter and a corresponding vision parameter;
converting the position parameter corresponding to the at least one object in the first environment into a first audio parameter;
converting the vision parameter corresponding to the at least one object in the first environment into a second audio parameter;
outputting a first acoustic information and a second acoustic information according to the first audio parameter and the second audio parameter.
2. The method as claimed in claim 1, characterized in that the step of converting the vision parameter corresponding to the at least one object in the first environment into the second audio parameter comprises:
obtaining the vision parameter corresponding to the at least one object, wherein the vision parameter at least includes a visual color and/or a visual shape;
converting the obtained vision parameter into the second audio parameter.
3. The method as claimed in claim 2, characterized in that the step of converting the obtained vision parameter into the second audio parameter comprises:
obtaining a correspondence set of the vision parameter and the second audio parameter, wherein the correspondence set includes the correspondence between at least one vision parameter and the second audio parameter;
determining the second audio parameter according to the obtained vision parameter and the correspondences included in the correspondence set.
4. The method as claimed in claim 1, characterized in that the first audio parameter at least includes an azimuth, and the second audio parameter at least includes one or more of frequency, volume and timbre.
5. The method as claimed in claim 1, characterized by further comprising, before the step of converting the position parameter corresponding to the at least one object in the first environment into the first audio parameter:
monitoring the position of the at least one object to obtain a monitoring result;
judging whether the monitoring result indicates that the position of the at least one object has changed;
when the judging result is yes, obtaining the updated position parameter of each of the at least one object;
wherein the step of converting the position parameter corresponding to the at least one object in the first environment into the first audio parameter comprises: converting the updated position parameter corresponding to the at least one object in the first environment into the first audio parameter.
6. The method as claimed in claim 1, characterized in that the step of obtaining the environmental information of the first environment comprises:
collecting the image information of the first environment;
processing the image information to obtain the environmental information of the first environment.
7. The method as claimed in claim 1, characterized in that the step of obtaining the environmental information of the first environment comprises:
detecting the first environment;
obtaining the environmental information of the first environment through the detection.
8. An information output apparatus, characterized by comprising:
an acquisition module, configured to obtain the environmental information of a first environment, wherein the first environment includes at least one object, and each of the at least one object has a corresponding position parameter and a corresponding vision parameter;
a processing module, configured to convert the position parameter corresponding to the at least one object in the first environment into a first audio parameter, and to convert the vision parameter corresponding to the at least one object in the first environment into a second audio parameter;
an output module, configured to output a first acoustic information and a second acoustic information according to the first audio parameter and the second audio parameter.
9. The apparatus as claimed in claim 8, characterized in that the acquisition module is further configured to obtain the vision parameter corresponding to the at least one object, wherein the vision parameter at least includes a visual color and/or a visual shape; and the processing module is further configured to convert the obtained vision parameter into the second audio parameter.
10. The apparatus as claimed in claim 9, characterized in that the processing module further comprises:
a first acquiring unit, configured to obtain a correspondence set of the vision parameter and the second audio parameter, wherein the correspondence set includes the correspondence between at least one vision parameter and the second audio parameter;
a determining unit, configured to determine the second audio parameter according to the obtained vision parameter and the correspondences included in the correspondence set.
11. The apparatus as claimed in claim 8, characterized in that the first audio parameter at least includes an azimuth, and the second audio parameter at least includes one or more of frequency, volume and timbre.
12. The apparatus as claimed in claim 8, characterized in that the acquisition module further comprises:
a monitoring unit, configured to monitor the position of the at least one object to obtain a monitoring result;
a judging unit, configured to judge whether the monitoring result indicates that the position of the at least one object has changed;
a second acquiring unit, configured to obtain, when the judging result is yes, the updated position parameter of each of the at least one object;
wherein the processing module is further configured to convert the updated position parameter corresponding to the at least one object in the first environment into the first audio parameter.
13. The apparatus as claimed in claim 8, characterized in that the acquisition module further comprises:
a collecting unit, configured to collect the image information of the first environment;
a second acquiring unit, configured to process the image information to obtain the environmental information of the first environment.
14. The apparatus as claimed in claim 8, characterized in that the acquisition module further comprises:
a monitoring unit, configured to detect the first environment;
a second acquiring unit, configured to obtain the environmental information of the first environment through the detection.
15. An electronic device, characterized by comprising:
a fixing unit, configured to fix the electronic device to a user;
a collecting unit, located toward the front after the electronic device is worn by the user, and configured to collect and process the environmental information of a first environment;
a processing unit, configured to convert the position parameter corresponding to at least one object in the first environment into a first audio parameter, and to convert the vision parameter corresponding to the at least one object in the first environment into a second audio parameter;
a sound output unit, located near the user's ear after the electronic device is worn, and configured to output information.
CN201210046679.9A 2012-02-24 2012-02-24 A kind of information output method, device and electronic equipment Active CN103294880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210046679.9A CN103294880B (en) 2012-02-24 2012-02-24 A kind of information output method, device and electronic equipment


Publications (2)

Publication Number Publication Date
CN103294880A CN103294880A (en) 2013-09-11
CN103294880B true CN103294880B (en) 2016-12-14

Family

ID=49095737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210046679.9A Active CN103294880B (en) 2012-02-24 2012-02-24 A kind of information output method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN103294880B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301773A (en) * 2017-06-16 2017-10-27 上海肇观电子科技有限公司 A kind of method and device to destination object prompt message

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2248956Y (en) * 1995-12-21 1997-03-05 山东师范大学 Hand holding type multifunction device for guiding blind person
CN2618235Y (en) * 2003-04-25 2004-05-26 赵舜培 Books with self-identifying content
CN101385677A (en) * 2008-10-16 2009-03-18 上海交通大学 Blind guiding method and device based on moving body track
CN201404415Y (en) * 2009-05-07 2010-02-17 蒋清晓 Ultrasonic short-range multi-directional environment reconstruction system


Also Published As

Publication number Publication date
CN103294880A (en) 2013-09-11

Similar Documents

Publication Publication Date Title
CN110148294B (en) Road condition state determining method and device
JP5881263B2 (en) Display of sound status on wearable computer system
JP5821307B2 (en) Information processing apparatus, information processing method, and program
CN105026984B (en) Head mounted display
CN108156561B (en) Audio signal processing method and device and terminal
US20200251124A1 (en) Method and terminal for reconstructing speech signal, and computer storage medium
CN108028957A (en) Information processor, information processing method and program
CN105700676A (en) Wearable glasses, control method thereof, and vehicle control system
CN111723602B (en) Method, device, equipment and storage medium for identifying driver behavior
CN105892472A (en) Mobile Terminal And Method For Controlling The Same
CN110027567A (en) The driving condition of driver determines method, apparatus and storage medium
CN105814518B (en) A kind of information processing method and Intelligent bracelet
JP2004077277A (en) Visualization display method for sound source location and sound source location display apparatus
CN106412687A (en) Interception method and device of audio and video clips
CN106067833A (en) Mobile terminal and control method thereof
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN106910322A (en) A kind of pre- myopia prevention device of wear-type based on stereoscopic vision and behavioural analysis
JP2024096996A (en) System and method for generating head-related transfer function
CN117334207A (en) Sound processing method and electronic equipment
CN106302974A (en) A kind of method of information processing and electronic equipment
CN103294880B (en) A kind of information output method, device and electronic equipment
CN106060707A (en) Reverberation processing method and device
CN110704204B (en) Navigation reminding method and device based on user perception
CN105277193B (en) Prompt information output method, apparatus and system
CN107665714A (en) Projector equipment noise cancellation method, device and projector equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant