WO2020056683A1 - System and method for adding odor information to digital photos - Google Patents

System and method for adding odor information to digital photos

Info

Publication number
WO2020056683A1
WO2020056683A1 (PCT/CN2018/106757)
Authority
WO
WIPO (PCT)
Prior art keywords
odor
digital photo
release
odor information
electronic device
Prior art date
Application number
PCT/CN2018/106757
Other languages
English (en)
French (fr)
Inventor
张恒中
吕奇恩
Original Assignee
张恒中
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 张恒中
Priority to PCT/CN2018/106757 priority Critical patent/WO2020056683A1/zh
Priority to CN201880097844.9A priority patent/CN112771471A/zh
Priority to EP18934032.6A priority patent/EP3855287A4/en
Publication of WO2020056683A1 publication Critical patent/WO2020056683A1/zh
Priority to US17/205,829 priority patent/US11914640B2/en

Classifications

    • G06F16/583 — Retrieval of still image data characterised by using metadata automatically derived from the content
    • G06F16/5854 — Retrieval using metadata automatically derived from the content, using shape and object relationship
    • G06F16/972 — Retrieval from the web; access to data in other repository systems, e.g. legacy data or dynamic Web page generation
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 — Head tracking input arrangements
    • G06F3/013 — Eye tracking input arrangements
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/04842 — GUI interaction techniques: selection of displayed objects or displayed text elements
    • G06F3/04845 — GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06V20/20 — Scene-specific elements in augmented reality scenes
    • G06V40/171 — Human faces: local features and components; facial parts, e.g. glasses; geometrical relationships
    • G06V40/172 — Human faces: classification, e.g. identification
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • A61L9/015 — Disinfection, sterilisation or deodorisation of air using gaseous or vaporous substances, e.g. ozone
    • A61L2209/11 — Apparatus for controlling air treatment

Definitions

  • The invention relates to an adding system and an adding method, and in particular to a system and method for adding odor information to a digital photo.
  • The main object of the present invention is to provide a system and method that can produce digital photos with added odor information and store or send those photos.
  • the adding system of the present invention includes an application program disposed in an electronic device and a database storing a plurality of odor information.
  • The adding method includes the following steps: the electronic device reads and opens a digital photo through the application program; locates a pixel point on the digital photo that has been externally selected; searches outward from the pixel point for a boundary to determine a mark range; obtains the odor information to be added from the database; associates the mark range with the odor information; and stores or transmits the marked digital photo.
  • The technical effect of the present invention is that one user can produce a digital photo with added odor information, and another user can open that photo on another electronic device, where the odor information added to the photo controls an odor release device to release the corresponding odor.
  • In this way, users obtain both a visual and an olfactory experience from digital photos.
  • FIG. 1 is a schematic diagram of a first specific embodiment of the odor information adding system of the present invention.
  • FIG. 2 is a flowchart of a first specific embodiment of the odor information adding method of the present invention.
  • FIG. 3A is a first operation diagram of the first specific embodiment of the method for adding odor information according to the present invention.
  • FIG. 3B is a second operation diagram of the first specific embodiment of the method for adding odor information according to the present invention.
  • FIG. 3C is a third operation diagram of the first specific embodiment of the method for adding odor information according to the present invention.
  • FIG. 3D is a fourth operation diagram of the first specific embodiment of the method for adding odor information according to the present invention.
  • FIG. 3E is a fifth operation diagram of the first specific embodiment of the method for adding odor information according to the present invention.
  • FIG. 3F is a sixth operation diagram of the first specific embodiment of the method for adding odor information according to the present invention.
  • FIG. 4 is a schematic diagram of a first specific embodiment of an odor formulation of the present invention.
  • FIG. 5 is a flowchart of a second specific embodiment of the odor information adding method of the present invention.
  • FIG. 6 is a flowchart of a first specific embodiment of the odor release method of the present invention.
  • FIG. 7A is a first operation diagram of the first specific embodiment of the odor releasing method of the present invention.
  • FIG. 7B is a second operation diagram of the first specific embodiment of the odor releasing method of the present invention.
  • FIG. 7C is a third operation diagram of the first specific embodiment of the odor releasing method of the present invention.
  • FIG. 8 is a flowchart of a second specific embodiment of the odor release method of the present invention.
  • FIG. 9A is a first operation diagram of the second specific embodiment of the odor releasing method of the present invention.
  • FIG. 9B is a second operation diagram of the second specific embodiment of the odor releasing method of the present invention.
  • FIG. 10 is a flowchart of a third specific embodiment of the odor release method of the present invention.
  • FIG. 11 is an operation diagram of the third specific embodiment of the odor releasing method of the present invention.
  • FIG. 12 is a flowchart of a fourth specific embodiment of the odor release method of the present invention.
  • The invention discloses an adding system (hereinafter referred to as the adding system) capable of adding odor information to a digital photo.
  • The adding system mainly includes a marking program 11, installed and executed in an electronic device 1 (for example, a first electronic device), and a database 12 storing a plurality of preset odor information 121.
  • the database 12 is set in the electronic device 1.
  • the database 12 may be integrated with the marking program 11 or may be independent of the electronic device 1 without limitation.
  • the odor information 121 mainly records a corresponding component of an odor.
  • For example, the first odor information 1210 corresponds to a special odor composed of a first component 1211 (for example, citrus) at 40%, together with a second component 1212 (for example, sugar) and a third component 1213 (for example, strawberry) in their respective proportions.
  • An odor release device (such as the odor release device 2 shown in FIG. 1) can prepare the first component 1211, the second component 1212, and the third component 1213 according to the foregoing proportions, thereby generating and releasing an odor corresponding to the first odor information 1210. In this way, the user can obtain an olfactory experience through the odor release device 2.
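  • The formulation above can be sketched as a small data record: an odor information entry names its components and their proportions, and the release device dispenses each component accordingly. Only the citrus/sugar/strawberry names and the 40% figure come from the text; the remaining proportions, the `OdorInfo` type, and the millilitre units are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class OdorInfo:
    """One odor information record: component name -> proportion (sums to 1.0)."""
    name: str
    components: dict

# First odor information 1210: 40% citrus per the text; the other
# proportions are made up for this sketch.
first_odor_info = OdorInfo(
    name="special odor",
    components={"citrus": 0.40, "sugar": 0.35, "strawberry": 0.25},
)

def dispense_amounts(odor: OdorInfo, total_ml: float) -> dict:
    """How much of each component the odor release device should emit."""
    return {c: total_ml * p for c, p in odor.components.items()}
```

  • For a hypothetical 10 ml release, the device would blend 4 ml of the citrus component, with the rest split among the remaining components in proportion.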
  • the marking program 11 is mainly installed and executed on the electronic device 1.
  • the electronic device 1 can open the digital photo to be edited (for example, the digital photo 4 shown in FIG. 3A) through the marking program 11.
  • FIG. 2 is a first specific embodiment of a flowchart for adding odor information according to the present invention.
  • The digital photo to be edited is read and opened through the marking program 11 in the electronic device 1 (step S10), and the opened digital photo is displayed on the display screen of the electronic device 1.
  • The electronic device 1 accepts an external pointing operation performed by the user with an operation interface, such as a finger, a mouse, or a stylus pen, to select an arbitrary position (in units of one pixel) on the digital photo.
  • The marking program 11 locates the pixel point on the digital photo selected by the above external operation (step S12), uses the pixel point as a starting point, and searches outward from it for a boundary to determine a marking range (step S14).
  • The user may not be able to accurately select the intended range when pointing, which can lead to marking errors. For example, if the user's finger is too large, it may accidentally select unintended but very close pixels.
  • The present invention adopts the above-mentioned edge-finding assistance to overcome this mis-selection problem. As another example, if the target area is large relative to the user's finger, the pointing operation would otherwise have to be repeated many times to mark it.
  • The edge-finding assistance also effectively avoids such repeated pointing operations.
  • The selected pixel is expanded outward to a valid boundary so that a single pixel becomes a valid marking range, which helps the marking program 11 confirm the correct position on the digital photo where the user wants to add odor information.
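  • The boundary search described above can be sketched as region growing (flood fill): starting from the clicked pixel, neighbouring pixels whose value is close enough to the seed are absorbed into the marking range. The 4-connectivity and the colour-distance tolerance are assumptions; the patent does not name a specific edge-finding algorithm.

```python
from collections import deque

def grow_mark_range(image, seed, tolerance=30):
    """Expand a single selected pixel into a marking range.

    image: 2D list of grayscale values; seed: (row, col) of the clicked pixel.
    A pixel joins the range if its value is within `tolerance` of the seed's,
    so the region stops growing at a visible boundary.
    """
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - seed_val) <= tolerance:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```

  • A single tap thus yields the whole contiguous area of similar colour, which is the marking range 42 handed to the next step.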
  • The marking program 11 obtains one of the plurality of odor information 121 in the database 12 (step S16), and associates the marking range with the acquired odor information (step S18).
  • The odor information 121 may be selected by the user from the database 12, selected automatically by the marking program 11 from the database 12, or configured on the fly by the user while performing the adding action (described in more detail later).
  • The marking program 11 then determines whether the user's marking operation is completed.
  • The user can click a single position (corresponding to one pixel point) on the digital photo to obtain one marking range and add one piece of odor information to it.
  • Alternatively, the user can click multiple positions (corresponding to multiple pixel points) on the digital photo to obtain multiple marking ranges and add corresponding odor information to each of them. In this way, the digital photo can be associated with a plurality of identical or different odor information at the same time.
  • The marking program 11 can further store or transmit the digital photo after marking is completed (step S22). Specifically, the marking program 11 may store the marked digital photo in a portable storage device, transmit it to another electronic device through communication software (such as Line, WhatsApp, or WeChat), or upload and post it to a social networking site (such as Facebook, Instagram, Twitter, or Snapchat).
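  • The patent does not specify how the marking ranges and odor information travel with the photo; one plausible sketch, assuming a JSON sidecar format with hypothetical field names, pairs each marking range with its odor record so that a release program can recover them:

```python
import json

def serialize_marks(photo_name: str, marks: list) -> str:
    """marks: list of (pixels, odor_info) pairs, where pixels is the set of
    (row, col) coordinates in one marking range and odor_info is a dict.
    Both the sidecar layout and the field names are assumptions."""
    payload = {
        "photo": photo_name,
        "marks": [{"pixels": sorted(pixels), "odor": odor} for pixels, odor in marks],
    }
    return json.dumps(payload)

# One marking range of two pixels tagged with a hypothetical citrus blend.
doc = serialize_marks("photo4.jpg", [({(3, 5), (3, 6)}, {"name": "citrus blend"})])
```

  • A sidecar like this survives transmission through communication software or upload to a social networking site, as long as the receiving release program knows the same layout.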
  • Another user may execute a release program 110 on another electronic device 1 (for example, a second electronic device) to receive and open the marked digital photo.
  • The release program 110 can control an odor release device 2 connected to the second electronic device to generate and release an odor corresponding to one or more pieces of odor information added to the digital photo.
  • In this embodiment, the marking program 11 and the release program 110 are different functions of the same application program; however, they may also be two independent applications, without limitation.
  • the adding system of the present invention may further include a cloud database 3, and the cloud database 3 is connected to the electronic device 1 through a network.
  • the cloud database 3 can automatically configure various odor components through network search, administrator setting or machine learning, and generate corresponding multiple odor information accordingly.
  • The marking program 11 can connect to the cloud database 3 through the electronic device 1 to update the various odor information 121 in the database 12 periodically or from time to time.
  • The user can configure one or more pieces of odor information by himself and store them in the database 12 or upload them to the cloud database 3.
  • The user can also adjust the components of the odor to be added to generate odor information on the spot while performing the adding action, and then add the generated odor information to the marking range.
  • In other words, the adding method of the present invention can obtain pre-arranged odor information from the database 12, or perform the adding action according to odor information arranged by the user in real time.
  • FIGS. 3A to 3F are the first to sixth operation diagrams, respectively, of the first specific embodiment of the method for adding odor information according to the present invention.
  • the electronic device 1 can open the digital photo 4 to be edited by executing the marking program 11.
  • The user performs an external pointing operation on the digital photo 4, and the marking program 11 locates the pixel point 41 selected by the user to determine the position actually clicked.
  • The marking program 11 then searches outward from the pixel point 41 for a valid boundary to expand the pixel point 41 into a valid marking range 42.
  • The marking program 11 can accept a user operation to select one of the plurality of odor information 121 stored in the database 12 (in FIG. 3E, selecting the odor information 1 is taken as an example).
  • The marking program 11 can then associate the selected marking range 42 with the selected odor information 121, and further store or send the marked digital photo 4.
  • FIG. 5 is a flowchart of a second specific embodiment of the odor information adding method according to the present invention.
  • the electronic device 1 first executes the marking program 11, and reads and opens a digital photo through the marking program 11 (step S30).
  • The digital photo accepts the user's click, and the marking program 11 locates the selected pixel point on the digital photo (step S32) and searches outward from the pixel point for a valid boundary to determine a valid marking range (step S34).
  • The above steps S30 to S34 are the same as or similar to steps S10 to S14 of FIG. 2 described above, and are not repeated here.
  • The marking program 11 further performs image recognition on the marking range to identify it as a marked object (step S36). Specifically, in step S36, after the marking program 11 determines a valid marking range, it determines through image recognition processing whether the marking range corresponds to a meaningful object. For example, in the embodiment of FIG. 3D, after determining a valid marking range 42, the marking program 11 may further perform image recognition on the marking range 42 to identify the object corresponding to it (in FIG. 3D, a leaf is taken as an example).
  • The marking program 11 then queries the database 12 based on the identified marked object (step S38) and determines whether the database 12 has odor information 121 corresponding to the marked object (step S40). For example, if the marking program 11 determines through image recognition that a marked object is an apple, then in step S40 it determines whether odor information 121 corresponding to the smell of an apple is stored in the database 12; if the marking program 11 determines through image recognition that a marked object is grassland, then in step S40 it determines whether the database 12 has odor information 121 corresponding to the smell of grassland.
  • the above description is only one specific embodiment of the present invention, but is not limited thereto.
  • If in step S40 the marking program 11 determines that odor information 121 corresponding to the marked object is stored in the database 12, the marking program 11 automatically obtains that odor information 121 (step S42) and recommends it to the user (for example, through a reminder window displayed on the display screen of the electronic device 1). The user can then choose to use the recommended odor information 121 directly, select other odor information 121 from the database 12, or arrange the required odor information 121 himself.
  • If the marking program 11 determines that the database 12 does not have odor information 121 corresponding to the marked object, the marking program 11 accepts an external operation of the user to select one of the plurality of odor information 121 in the database 12 (step S44), or the user arranges the required odor information 121 himself.
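  • Steps S38 to S44 can be sketched as a lookup with a manual fallback: query the database by the recognized object's label, recommend the match if one exists, and otherwise fall back to a user choice. The dictionary database and the `ask_user` callable are stand-ins for the real database 12 and its selection UI.

```python
def choose_odor(database: dict, recognized_object: str, ask_user):
    """Recommend a matching odor if the database has one for the recognized
    object (step S42); otherwise fall back to a user-supplied choice
    (step S44). `ask_user` is a callable standing in for the selection UI."""
    odor = database.get(recognized_object)
    if odor is not None:
        return odor      # automatic recommendation
    return ask_user()    # manual selection or self-arranged odor information

# Toy database keyed by the object label produced by image recognition.
db = {"apple": {"name": "apple scent"}, "grass": {"name": "grass scent"}}
```

  • For an "apple" marked object the apple scent is recommended automatically; for an object with no entry, the fallback path returns whatever the user selects or arranges.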
  • the first electronic device can also be connected to the odor release device 2.
  • When the marking program 11 recommends an odor information 121, or the user selects an odor information 121 from the database 12, or the user arranges an odor information 121 himself, the marking program 11 may transmit that odor information 121 directly to the odor release device 2 connected to the first electronic device, so as to control the odor release device 2 to release the corresponding odor for the user to test. In this way, the user can decide whether to keep this odor information 121, reselect other odor information 121 from the database 12, or re-arrange another odor information 121.
  • the marking program 11 further associates the identified marking object with the selected odor information 121 (step S46).
  • the marking program 11 determines whether the user has completed the marking operation of the digital photo (step S48). If the user has not finished marking, the marking program 11 executes steps S32 to S46 again, so that the user can continue to click the next pixel to mark the next marked object.
  • the digital photo that has been marked can be further stored or transmitted (step S50).
  • the user can store the digital photos that have been marked on the portable storage medium through the marking program 11.
  • the user can pass the marked digital photos to other users through communication software.
  • the user can upload and post the marked digital photo on a social networking site. Therefore, other users can use another electronic device (such as a second electronic device) to open the marked digital photo, so as to obtain the visual and olfactory dual experience on the digital photo at the same time.
  • FIG. 6 is a flowchart of a first specific embodiment of the odor release method of the present invention, and FIGS. 7A to 7C are the first to third operation diagrams of the first specific embodiment of the odor releasing method of the present invention.
  • To use the marked digital photo 4, the user first operates another electronic device 1 (for example, the second electronic device), executes a release program 110 installed in the electronic device 1, and receives and opens the marked digital photo 4 through the release program 110 (step S60).
  • the electronic device 1 is connected to the odor release device 2 by a wired method (such as a transmission line) or a wireless method (such as a Wi-Fi connection or a Bluetooth connection).
  • the user may connect the portable storage medium storing the digital photo 4 to the electronic device 1 to cause the release program 110 to read and open the digital photo 4 stored in the portable storage medium.
  • the user may connect to a social networking site (such as Facebook) through the electronic device 1 and open the digital photo 4 posted on the social networking site by other users through the release process 110.
  • The opened digital photo 4 can be displayed on the display screen 13 of the electronic device 1; it is worth mentioning that the one or more marking ranges 42 can be displayed on the digital photo 4 at the same time. In this way, the user is reminded that the currently opened digital photo 4 is one with added odor information, and when the user wants to touch any of the marking ranges 42 on the digital photo 4, no mis-touch occurs.
  • The release program 110 accepts an external operation of the user through the display screen 13 of the electronic device 1 to select any one of the marking ranges 42 on the digital photo 4, and obtains the odor information 121 added to that marking range 42 (step S62).
  • For example, the user selects one of the marking ranges 42 in the digital photo 4 by operation, and the release program 110 obtains the odor information 121 added to that marking range 42 (here, the odor information 1 is taken as an example).
  • The release program 110 then transmits the acquired odor information 121 to the odor release device 2 connected to the electronic device 1 (step S64). The odor release device 2 receives the odor information 121 transmitted by the electronic device 1, and generates and releases a corresponding odor according to the received odor information 121 (step S66).
  • the odor release device 2 directly releases the odor corresponding to the received odor information 121.
  • Alternatively, the odor releasing device 2 blends a plurality of different components to generate the odor corresponding to the received odor information 121, and then releases the generated odor.
  • the user may press a release button immediately after selecting a marking range 42, so that the odor release device 2 immediately releases the corresponding odor.
  • the release program 110 may accept multiple external operations by the user to select multiple mark ranges 42 on the digital photo 4 and obtain multiple odor information 121 added to each mark range 42.
  • The release program 110 can further determine whether the user has finished selecting after step S62, and repeat step S62 until the user finishes, so that the user can select the next marking range 42. When the user then presses the release button, the release program 110 transmits the multiple acquired odor information 121 to the odor release device 2 at the same time, and the odor release device 2 mixes and releases multiple kinds of odor according to the received multiple odor information 121.
  • Through the above technical means, the present invention can make the smell smelled by the user closer to the actual smell felt at the moment the digital photo 4 was taken.
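  • The multi-selection release described above can be sketched as merging the component proportions of every selected odor information record into one blend before a single command is sent to the device. Equal weighting of the selections is an assumption; the patent only says the odors are mixed.

```python
def merge_odors(selected: list) -> dict:
    """Combine the component maps of several selected odor information
    records into one blend, weighting each selection equally. Each record
    is a dict of component name -> proportion."""
    blend = {}
    for odor in selected:
        for component, proportion in odor.items():
            blend[component] = blend.get(component, 0.0) + proportion / len(selected)
    return blend

# Two marking ranges selected: pure citrus plus a citrus/strawberry mix.
mix = merge_odors([{"citrus": 1.0}, {"citrus": 0.5, "strawberry": 0.5}])
```

  • The merged blend (here three parts citrus to one part strawberry) is what the odor release device 2 would receive as a single release command.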
  • In the above embodiment, the release program 110 selects one or more marking ranges 42 directly on the digital photo 4 through a user operation, and provides the corresponding odor information 121 to the odor release device 2, so that the odor release device 2 emits a corresponding odor.
  • the user can also trigger the release program 110 to perform the above-mentioned actions in other ways.
FIG. 8 is a flowchart of a second embodiment of the odor release method of the present invention, and FIGS. 9A and 9B are the first and second action diagrams of that embodiment. FIG. 8 further explains step S62 of FIG. 6.
After the digital photo 4 is opened through step S60 of FIG. 6, the release program 110 first accepts an external operation of a user 5 through the display screen 13 of the electronic device 1 to select any one of the mark ranges 42 on the digital photo 4 (step S70). Next, the release program 110 determines whether the user 5 has finished selecting (step S72). In other words, in this embodiment the user 5 can select a single mark range 42 on the digital photo 4 (so that the odor release device 2 releases a single odor) or multiple mark ranges 42 (so that the odor release device 2 mixes and releases multiple odors).

After the user 5 finishes selecting, the release program 110 obtains the odor information added to the selected mark range 42 (step S74). Next, the release program 110 performs a face recognition action on the user 5 through the sensor 14 of the electronic device 1 to locate the nose of the user 5 (step S76), and then continuously determines whether the nose of the user 5 is close to the electronic device 1 (step S78). If the nose of the user 5 is not close to the electronic device 1 (for example, the distance from the electronic device 1 is not less than a threshold distance), the release program 110 defers subsequent actions and keeps detecting the nose of the user 5.
When the release program 110 determines that the nose of the user 5 is close to the electronic device 1, step S64 shown in FIG. 6 is performed to transmit the odor information 121 added to the selected mark range 42 to the odor release device 2. Finally, step S66 shown in FIG. 6 is performed, i.e., the odor release device 2 generates and releases the corresponding odor according to the received odor information 121.

In this embodiment, the release program 110 lets the user 5 first select a mark range 42 of interest on the digital photo 4, and controls the odor release device 2 to release the odor only when the nose of the user 5 comes close. This simulates the behavior of approaching a physical object and smelling it, making the present invention more engaging.
In this embodiment, the odor release device 2 may be, for example, a back cover of the electronic device 1 (for example, when the electronic device 1 is a smartphone), or may be integrated with the electronic device 1. The location from which the odor release device 2 releases odor is then quite close to the display screen 13 of the electronic device 1, so when the nose of the user 5 approaches the electronic device 1, the user 5 can clearly perceive the odor released by the odor release device 2.
In the above embodiment, the sensor 14 of the electronic device 1 may comprise, for example, a proximity sensor, a flood illuminator, a dot projector and an infrared lens. Specifically, after step S74, the electronic device 1 may control the proximity sensor to detect whether an object is approaching, and activate the flood illuminator when an object approaches. Once activated, the flood illuminator emits unstructured light to detect the contour of a face, and the electronic device 1 receives the reflected light through the infrared lens to obtain image information, from which it determines whether the object is a human face.

If the electronic device 1 determines that the object is not a human face, it takes no further action. If the object is a human face, the electronic device 1 further controls the dot projector to project a plurality of infrared dots (for example, 30,000) onto the face, and receives the reflected infrared light through the infrared lens to obtain face information. The electronic device 1 can then identify the position of the nose of the user 5 from the face information, and determine whether the nose is close to the electronic device 1. The release program 110 performs the subsequent actions after determining that the nose of the user 5 is close to the electronic device 1 (i.e., the distance from the electronic device 1 is less than the threshold distance).
FIG. 10 is a flowchart of a third embodiment of the odor release method of the present invention, and FIG. 11 illustrates that embodiment. FIG. 10 further explains step S62 of FIG. 6. In this embodiment, the odor release device 2 may be a virtual reality (VR) or augmented reality (AR) helmet or glasses, and the electronic device 1 is disposed inside the odor release device 2 or integrated with it.

In this embodiment, the user 5 cannot directly touch the display screen 13 of the electronic device 1 with a finger or another input interface. Therefore, after the electronic device 1 opens the digital photo 4 through the release program 110 (i.e., step S60 of FIG. 6), the release program 110 senses the gestures of the user 5 through the sensor 14 of the electronic device 1 (step S80) and, when a gesture is sensed, determines whether it corresponds to any one of the mark ranges 42 on the digital photo 4 (step S82).
For example, when performing the marking action, the user may map different mark ranges 42 to different gestures (for example, scissors, rock and paper gestures made with the user's hand). After the release program 110 senses a gesture of the user 5 in step S80, it can determine whether the sensed gesture corresponds to any mark range 42. As another example, because the odor release device 2 in this embodiment is a VR/AR helmet or glasses, the release program 110 can sense the position of the hand of the user 5 while the digital photo 4 is opened and displayed on the display screen 13, and determine whether the hand position corresponds to the display position of any mark range 42 on the digital photo 4 (for example, the user's index finger pointing at the display position of a mark range 42).
If the release program 110 determines in step S82 that the gesture of the user 5 does not correspond to any mark range 42, the process returns to step S80 to keep sensing the gestures of the user 5 through the sensor 14 of the electronic device 1. If the gesture corresponds to a mark range 42, the release program 110 obtains the odor information 121 added to that mark range 42 (step S84). Step S64 shown in FIG. 6 is then executed so that the release program 110 transmits the acquired odor information 121 to the odor release device 2, and finally step S66 shown in FIG. 6 is performed, i.e., the odor release device 2 generates and releases the corresponding odor according to the received odor information 121.
In another embodiment, the odor release device 2 (i.e., the VR/AR helmet or glasses) and the electronic device 1 can also implement the aforementioned odor release action through eye tracking and voice control.
In a fourth embodiment, after the digital photo 4 is opened, the release program 110 can track the eye position of the user 5 through another sensor on the electronic device 1 (such as a camera lens) (step S90) to determine which mark range 42 on the digital photo 4 the gaze of the user 5 is focused on. If the user 5 confirms that this mark range 42 is correct, the user may further issue a corresponding release instruction by voice, or make a corresponding gesture by hand. The release program 110 may sense and receive the release instruction or the gesture through a sensor (such as the above-mentioned sensor 14), and determine whether the release instruction or the gesture is correct (step S94). If so, the release program 110 selects the mark range 42 on which the eyes of the user 5 are focused, and obtains the odor information 121 added to that mark range 42 (step S96). Finally, the release program 110 performs step S64 shown in FIG. 6 to transmit the acquired odor information 121 to the odor release device 2 and controls the odor release device 2 to release the corresponding odor.
Through the present invention, users can easily create digital photos with added odor information and share them with other users. Other users can display the digital photos through an electronic device and release the odors added to them through an odor release device, thereby obtaining a dual visual and olfactory experience.

Abstract

An adding system for adding odor information (121) to a digital photo (4) comprises an application program installed in an electronic device (1) and a database (12) storing multiple kinds of odor information (121). The adding method comprises the following steps: the electronic device (1) reads and opens the digital photo (4) through the application program; locates a pixel (41) on the digital photo (4) that receives an external click; searches outward from the pixel (41) for a boundary to determine a mark range (42); obtains the odor information (121) to be added from the database (12); associates the mark range (42) with the odor information (121); and stores or transmits the marked digital photo (4). Through the adding system and adding method, other users (5) can open the marked digital photo (4) on other electronic devices (1), and control an odor release device (2) to release the corresponding odor according to the odor information (121) added to the digital photo (4).

Description

Adding system and adding method for adding odor information to digital photos

Technical Field
The present invention relates to an adding system and an adding method, and more particularly to an adding system and an adding method for adding odor information to digital photos.
Background
People usually preserve beautiful scenery they see through photos. Unfortunately, photos preserve only the visual experience for the user, not the olfactory experience.
In recent years, some vendors have introduced special printers and special inks that allow users to print physical photos carrying particular scents. However, such physical photos emit the scent continuously through the special ink, and the user cannot control when or where the scent is released. As a result, such physical photos may inconvenience the user, and the scent lasts only a short time.
Meanwhile, with the rapid development of the Internet, the general public now rarely prints and collects physical photos, storing and using digital photos instead. The market therefore needs a novel technique that lets users freely add corresponding odors to digital photos, further enriching the experience that digital photos provide.
Summary of the Invention
The main purpose of the present invention is to provide an adding system and an adding method for adding odor information to digital photos, capable of creating digital photos with added odor information and storing or transmitting them.
To achieve the above purpose, the adding system of the present invention comprises an application program installed in an electronic device and a database storing multiple kinds of odor information. The adding method comprises the following steps: the electronic device reads and opens a digital photo through the application program; locates a pixel on the digital photo that receives an external click; searches outward from the pixel for a boundary to determine a mark range; obtains the odor information to be added from the database; associates the mark range with the odor information; and stores or transmits the marked digital photo.
Compared with the related art, the present invention allows one user to create digital photos with added odor information, while another user can open the digital photo on another electronic device and, according to the odor information added to the photo, control an odor release device to release the corresponding odor. Users thus obtain a dual visual and olfactory experience from digital photos.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a first embodiment of the odor information adding system of the present invention.
FIG. 2 is a flowchart of a first embodiment of the odor information adding method of the present invention.
FIG. 3A is a first action diagram of the first embodiment of the odor information adding method of the present invention.
FIG. 3B is a second action diagram of the first embodiment of the odor information adding method of the present invention.
FIG. 3C is a third action diagram of the first embodiment of the odor information adding method of the present invention.
FIG. 3D is a fourth action diagram of the first embodiment of the odor information adding method of the present invention.
FIG. 3E is a fifth action diagram of the first embodiment of the odor information adding method of the present invention.
FIG. 3F is a sixth action diagram of the first embodiment of the odor information adding method of the present invention.
FIG. 4 is a schematic diagram of a first embodiment of an odor formula of the present invention.
FIG. 5 is a flowchart of a second embodiment of the odor information adding method of the present invention.
FIG. 6 is a flowchart of a first embodiment of the odor release method of the present invention.
FIG. 7A is a first action diagram of the first embodiment of the odor release method of the present invention.
FIG. 7B is a second action diagram of the first embodiment of the odor release method of the present invention.
FIG. 7C is a third action diagram of the first embodiment of the odor release method of the present invention.
FIG. 8 is a flowchart of a second embodiment of the odor release method of the present invention.
FIG. 9A is a first action diagram of the second embodiment of the odor release method of the present invention.
FIG. 9B is a second action diagram of the second embodiment of the odor release method of the present invention.
FIG. 10 is a flowchart of a third embodiment of the odor release method of the present invention.
FIG. 11 illustrates a third embodiment of the odor release method of the present invention.
FIG. 12 is a flowchart of a fourth embodiment of the odor release method of the present invention.
Description of reference numerals:
1…electronic device
11…marking program
110…release program
12…database
121…odor information
1210…first odor information
1211…first component
1212…second component
1213…third component
13…display screen
14…sensor
2…odor release device
3…cloud database
4…digital photo
41…pixel
42…mark range
5…user
S10–S22, S30–S50…adding steps
S60–S66, S70–S78, S80–S84, S90–S96…release steps.
Detailed Description of the Embodiments
A preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, a schematic diagram of a first embodiment of the odor information adding system of the present invention. The present invention discloses an adding system capable of adding odor information to digital photos (hereinafter, the adding system). The adding system mainly comprises a marking program 11 installed and executed in an electronic device 1 (for example, a first electronic device), and a database 12 storing multiple kinds of preset odor information 121. In the embodiment of FIG. 1, the database 12 is disposed in the electronic device 1. In other embodiments, the database 12 may be integrated with the marking program 11, or located outside the electronic device 1, without limitation.
Referring also to FIG. 4, a schematic diagram of a first embodiment of an odor formula of the present invention. In the present invention, each piece of odor information 121 mainly records the components of a corresponding odor. As shown in FIG. 4, first odor information 1210 corresponds to a particular odor, which is composed of 40% of a first component 1211 (for example, citrus), 30% of a second component 1212 (for example, sugar) and 30% of a third component 1213 (for example, strawberry).
If the odor release device (such as the odor release device 2 shown in FIG. 1) is equipped with the first component 1211, the second component 1212 and the third component 1213, then after receiving the first odor information 1210 it can blend the three components according to the above proportions, thereby generating and releasing the odor corresponding to the first odor information 1210. The user thus obtains an olfactory experience through the odor release device 2.
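The proportion-based odor formula described above can be represented as a simple data structure. The following Python sketch is illustrative only; the function and variable names are not from the patent, and the validation rule (percentages summing to 100) is an assumption consistent with FIG. 4:

```python
# Hypothetical model of an odor-information record: component name -> percent.

def make_odor_info(components):
    """Validate and return an odor formula whose proportions sum to 100%."""
    total = sum(components.values())
    if total != 100:
        raise ValueError(f"proportions must sum to 100, got {total}")
    return dict(components)

def dispense_amounts(odor_info, total_ml):
    """Convert percentages into per-component amounts for the release
    device to blend, as in the FIG. 4 example."""
    return {name: total_ml * pct / 100 for name, pct in odor_info.items()}

# The first odor information of FIG. 4: 40% citrus, 30% sugar, 30% strawberry.
first_odor_info = make_odor_info({"citrus": 40, "sugar": 30, "strawberry": 30})
amounts = dispense_amounts(first_odor_info, 10.0)
```

With a 10 ml reservoir budget this yields 4 ml citrus and 3 ml each of sugar and strawberry scent.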
Returning to FIG. 1, the marking program 11 is installed and executed in the electronic device 1, and the electronic device 1 can open a digital photo to be edited (for example, the digital photo 4 shown in FIG. 3A) through the marking program 11.
Referring also to FIG. 2, a flowchart of a first embodiment of the odor information adding method of the present invention. When a user wants to add odor information to a digital photo, the marking program 11 in the electronic device 1 reads and opens the digital photo to be edited (step S10), and displays the opened photo on the display screen of the electronic device 1.
Next, the electronic device 1 accepts an external click performed by the user with a finger or an input interface such as a mouse or stylus, selecting an arbitrary position on the digital photo (in units of one pixel). The marking program 11 locates the clicked pixel on the digital photo (step S12), takes that pixel as a starting point, and searches outward from it for a boundary to determine a mark range (step S14).
Specifically, the user may be unable to click an effective range precisely, which could cause odor information to be added in the wrong place. For example, if the user's finger is too large, a nearby but unwanted pixel may be clicked by mistake; the boundary-finding assistance described above overcomes this problem. Conversely, if the user's finger is too small, marking a larger range would require many repeated clicks, which the boundary-finding assistance also avoids. Accordingly, this embodiment searches outward from the clicked pixel for an effective boundary, expanding a single pixel into an effective mark range, which helps the marking program 11 confirm the exact position on the digital photo where the user wants to add odor information.
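The outward boundary search could be implemented, for instance, as flood-fill style region growing from the clicked seed pixel. The patent does not specify the algorithm; the grayscale grid, 4-connectivity, and tolerance-based stop rule below are all assumptions made for illustration:

```python
from collections import deque

def find_mark_range(image, seed, tolerance=10):
    """Grow a mark range outward from the clicked seed pixel.

    image: 2D list of grayscale values; seed: (row, col).
    A neighboring pixel joins the range while its value stays within
    `tolerance` of the seed value (an assumed boundary criterion).
    """
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    seed_val = image[sr][sc]
    in_range = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # Visit the four direct neighbors (4-connectivity).
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in in_range:
                if abs(image[nr][nc] - seed_val) <= tolerance:
                    in_range.add((nr, nc))
                    queue.append((nr, nc))
    return in_range

# Clicking the top-left pixel expands to the dark 2x2 block only.
img = [[10, 12, 90],
       [11, 13, 95],
       [88, 92, 94]]
region = find_mark_range(img, (0, 0))
```

A production implementation would more likely use an edge-detection or segmentation routine, but the expansion from one pixel to a coherent region is the same idea.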
After step S14, the marking program 11 obtains one of the multiple pieces of odor information 121 from the database 12 (step S16), and associates the mark range with the obtained odor information (step S18). The odor information 121 may be selected from the database 12 by the user, selected automatically by the marking program 11 from the database 12, or blended by the user on the fly during the adding action, without limitation (described in detail later). The marking program 11 then determines whether the user's marking action is complete.
In one embodiment, the user clicks one position (corresponding to one pixel) on the digital photo to obtain one mark range, and adds one kind of odor information to that range. In another embodiment, the user clicks multiple positions (corresponding to multiple pixels) to obtain multiple mark ranges, and adds corresponding odor information to each of them. In this way, the digital photo can be associated with multiple identical or different kinds of odor information at the same time.
After the user has finished marking, the marking program 11 can further store or transmit the marked digital photo (step S22). Specifically, after the marking is complete, the marking program 11 may store the digital photo in a portable storage device, transmit it to another electronic device through communication software (for example, Line, WhatsApp or WeChat), or upload and post it on a social networking site (for example, Facebook, Instagram, Twitter or Snapchat).
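The patent leaves the storage format of a marked photo open. One plausible way to store or transmit the marks is to carry the mark ranges and odor information as a JSON payload alongside (or embedded in) the image file. This sketch is a hedged assumption, not the patent's format; the rectangular bounding boxes and field names are invented:

```python
import json

def serialize_marks(photo_name, marks):
    """Serialize (bounding_box, odor_info) pairs into a JSON string
    that can be stored with or sent alongside the digital photo.
    The schema here is purely illustrative."""
    payload = {
        "photo": photo_name,
        "marks": [
            {"range": {"x": x, "y": y, "w": w, "h": h}, "odor": odor}
            for (x, y, w, h), odor in marks
        ],
    }
    return json.dumps(payload)

doc = serialize_marks(
    "photo4.jpg",
    [((12, 30, 40, 25), {"citrus": 40, "sugar": 30, "strawberry": 30})],
)
restored = json.loads(doc)  # a receiving release program can parse it back
```

In practice the same data could equally be embedded in image metadata (e.g. an XMP/Exif-style field) so that a single file travels through messaging apps intact.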
As described above, another user can execute a release program 110 on another electronic device 1 (for example, a second electronic device) to receive and open the marked digital photo. The release program 110 can then control the odor release device 2 connected to the second electronic device to generate and release the odors corresponding to the one or more kinds of odor information added to the photo. In the present invention, the marking program 11 and the release program 110 refer to different functions of the same application program, but they may also be two independent applications, without limitation.
It is worth mentioning that odors may change with time and place. The adding system of the present invention may further comprise a cloud database 3 connected to the electronic device 1 through a network. The cloud database 3 can automatically formulate the components of various odors through network searches, administrator settings or machine learning, and generate multiple kinds of corresponding odor information accordingly. The marking program 11 can connect to the cloud database 3 through the electronic device 1 to update the multiple kinds of odor information 121 in the database 12 periodically or on demand.
In another embodiment, the user can personally formulate one or more kinds of odor information and store them in the database 12 or upload them to the cloud database 3. Furthermore, the user may formulate the components of the odor to be added on the fly during the adding action to produce the odor information, and then add the produced odor information to the mark range. In other words, the adding method of the present invention can perform the adding action with pre-formulated odor information obtained from the database 12, or with odor information formulated by the user in real time.
Referring to FIGS. 3A to 3F, the first to sixth action diagrams of the first embodiment of the odor information adding method of the present invention. As shown in FIG. 3A, the electronic device 1 opens the digital photo 4 to be edited by executing the marking program 11. As shown in FIG. 3B, the user performs an external click on the digital photo 4, and the marking program 11 locates the clicked pixel 41 to determine the position the user actually clicked.
Next, as shown in FIGS. 3C and 3D, the marking program 11 searches outward from the pixel 41 for an effective boundary to expand the pixel 41 into an effective mark range 42. As shown in FIG. 3E, the marking program 11 accepts a user operation to select one of the multiple kinds of odor information 121 stored in the database 12 (FIG. 3E takes selecting odor information 1 as an example). Finally, as shown in FIG. 3F, when the user presses the confirm button displayed on the electronic device 1, the marking program 11 associates the selected mark range 42 with the selected odor information 121, and further stores or transmits the marked digital photo 4.
Referring to FIG. 5, a flowchart of a second embodiment of the odor information adding method of the present invention. In this embodiment, the electronic device 1 (for example, the first electronic device) first executes the marking program 11, which reads and opens a digital photo (step S30). The digital photo then receives the user's click, and the marking program 11 locates the clicked pixel (step S32) and searches outward from it for an effective boundary to determine an effective mark range (step S34). Steps S30 to S34 are the same as or similar to steps S10 to S14 of FIG. 2 and are not repeated here.
After step S34, the marking program 11 further performs image recognition on the mark range to identify the mark range as a marked object (step S36). Specifically, after the marking program 11 has determined an effective mark range, step S36 determines through image recognition whether the mark range corresponds to a meaningful object. For example, in the embodiment of FIG. 3D, after determining an effective mark range 42, the marking program 11 can further perform image recognition on the mark range 42 to identify the object it corresponds to (a leaf in FIG. 3D).
Next, the marking program 11 queries the database 12 according to the identified marked object (step S38), and determines whether the database 12 stores odor information 121 corresponding to the marked object (step S40). For example, if image recognition identifies a marked object as an apple, the marking program 11 determines in step S40 whether the database 12 stores odor information 121 corresponding to the scent of an apple; if the marked object is identified as grass, the marking program 11 determines whether the database 12 stores odor information 121 corresponding to the scent of grass. The above description is only one embodiment of the present invention and is not limiting.
If the marking program 11 determines in step S40 that the database 12 stores odor information 121 corresponding to the marked object, the marking program 11 automatically obtains that odor information 121 (step S42) and recommends it to the user (for example, in a reminder window displayed on the display screen of the electronic device 1). The user can then choose to use the recommended odor information 121 directly, select other odor information 121 from the database 12, or formulate the desired odor information 121 personally.
Otherwise, if the marking program 11 determines that the database 12 does not contain odor information 121 corresponding to the marked object, the marking program 11 further accepts an external operation of the user to select one of the multiple kinds of odor information 121 in the database 12 (step S44), or the user formulates the desired odor information 121 personally.
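The recommendation branch of steps S38–S44 reduces to a lookup keyed by the recognized object. A minimal sketch (the object labels and example recipes are invented for illustration; the recognition step itself is assumed to happen elsewhere):

```python
def recommend_odor(marked_object, odor_db):
    """Steps S38-S44 in miniature: return the stored odor info for a
    recognized object, or None so the user can pick or blend one."""
    return odor_db.get(marked_object)

# Hypothetical database contents keyed by recognized object name.
odor_db = {
    "apple": {"apple": 100},
    "grass": {"cut grass": 100},
}

apple_odor = recommend_odor("apple", odor_db)   # found: recommend it
leaf_odor = recommend_odor("leaf", odor_db)     # not found: fall back to user choice
```

When the lookup returns `None`, the flow falls through to step S44 (manual selection or user-blended odor information).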
It is worth mentioning that in one embodiment, the first electronic device may also be connected to the odor release device 2. In this embodiment, after the marking program 11 recommends a piece of odor information 121, or the user selects one from the database 12 or formulates one personally, the marking program 11 can transmit this odor information 121 directly to the odor release device 2 connected to the first electronic device, controlling the device to release the corresponding odor for the user to sample. The user can thereby decide whether to keep this odor information 121, reselect other odor information 121 from the database 12, or formulate yet another piece of odor information 121.
After steps S42 and S44, the marking program 11 further associates the identified marked object with the selected odor information 121 (step S46). The marking program 11 then determines whether the user has completed the marking action on the digital photo (step S48). If not, the marking program 11 executes steps S32 to S46 again, so that the user can click the next pixel to mark the next object.
If the marking program 11 determines that the user has finished marking, it can further store or transmit the marked digital photo (step S50).
As described above, in one embodiment the user can store the marked digital photo in a portable storage medium through the marking program 11. In another embodiment, the user can send the marked digital photo to other users through communication software. In yet another embodiment, the user can upload and post the marked digital photo on a social networking site. Other users can therefore open the marked digital photo with another electronic device (for example, a second electronic device) and obtain a dual visual and olfactory experience from it.
Referring to FIG. 6 and FIGS. 7A to 7C, FIG. 6 is a flowchart of a first embodiment of the odor release method of the present invention, and FIGS. 7A to 7C are the first to third action diagrams of that embodiment.
To use a marked digital photo 4, a user first operates another electronic device 1 (for example, a second electronic device) and executes the release program 110 installed in it, so as to receive and open the marked digital photo 4 through the release program 110 (step S60). In this embodiment, the electronic device 1 connects to the odor release device 2 by wire (for example, a cable) or wirelessly (for example, Wi-Fi or Bluetooth).
For example, the user can connect a portable storage medium storing the digital photo 4 to the electronic device 1, so that the release program 110 reads and opens the digital photo 4 stored in it. As another example, the user can connect to a social networking site (for example, Facebook) through the electronic device 1, and open, through the release program 110, a digital photo 4 posted there by another user.
After the release program 110 opens the digital photo 4, it can display the opened photo on the display screen 13 of the electronic device 1, and, notably, simultaneously display the one or more marked ranges 42. This not only reminds the user that the currently opened digital photo 4 carries odor information, but also prevents touch errors when the user touches any mark range 42 on the photo.
Next, the release program 110 accepts the user's external operation through the display screen 13 of the electronic device 1 to select any mark range 42 on the digital photo 4, and obtains the odor information 121 added to that mark range 42 (step S62). For example, in the embodiment of FIG. 7B, the user selects one of the mark ranges 42 of the digital photo 4 by operation, and the release program 110 further obtains the odor information 121 added to that range (odor information 1 in this example).
If the user confirms releasing the odor corresponding to this odor information 121 (for example, by pressing the release button shown in FIG. 7C), the release program 110 transmits the acquired odor information 121 to the odor release device 2 connected to the electronic device 1 (step S64). The odor release device 2 thus receives the odor information 121 transmitted by the electronic device 1, and generates and releases the corresponding odor according to it (step S66).
In one embodiment, the odor release device 2 directly releases the odor corresponding to the received odor information 121. In another embodiment, the odor release device 2 blends multiple different components to produce the odor corresponding to the received odor information 121, and then releases the produced odor.
In the above embodiment, the user can press the release button immediately after selecting a mark range 42, so that the odor release device 2 releases the corresponding odor at once.
In another embodiment, the release program 110 can accept multiple external operations from the user to select multiple mark ranges 42 on the digital photo 4, and obtain the multiple kinds of odor information 121 respectively added to the mark ranges 42. In other words, after step S62 the release program 110 can further determine whether the user has finished selecting, and repeat step S62 until the selection is complete, so that the user can select the next mark range 42. When the user then presses the release button, the release program 110 transmits all of the acquired odor information 121 to the odor release device 2 at the same time, and the odor release device 2 mixes and releases multiple odors according to the received odor information 121. Through the above technique, the present invention makes the odor smelled by the user closer to the actual odor perceived at the moment the digital photo 4 was taken.
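Mixing several selected odor information entries could be sketched, for example, by summing each recipe's component proportions and renormalizing to 100%. The actual blending rule is not specified in the patent, so this is one assumed interpretation:

```python
def mix_odors(odor_infos):
    """Combine several component->percent recipes into a single recipe,
    renormalized so the result sums to 100 (an assumed mixing rule)."""
    combined = {}
    for info in odor_infos:
        for component, pct in info.items():
            combined[component] = combined.get(component, 0) + pct
    total = sum(combined.values())
    return {c: 100 * p / total for c, p in combined.items()}

# Two selected mark ranges: pure citrus, and a citrus/sugar blend.
mixed = mix_odors([{"citrus": 100}, {"citrus": 40, "sugar": 60}])
```

Here the two ranges contribute equally, giving a 70/30 citrus/sugar blend; a real device might instead weight recipes by the size of each mark range.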
In the embodiment of FIG. 6, the release program 110 selects one or more mark ranges 42 directly on the digital photo 4 through user operations, and provides the corresponding odor information 121 to the odor release device 2 so that the odor release device 2 releases the corresponding odor. In the present invention, the user can also trigger the release program 110 to perform the above actions in other ways.
Referring to FIG. 8 and FIGS. 9A to 9B, FIG. 8 is a flowchart of a second embodiment of the odor release method of the present invention, and FIGS. 9A to 9B are the first and second action diagrams of that embodiment. FIG. 8 further explains step S62 of FIG. 6.
After the digital photo 4 is opened through step S60 of FIG. 6, the release program 110 first accepts an external operation of a user 5 through the display screen 13 of the electronic device 1 to select any mark range 42 on the digital photo 4 (step S70). Next, the release program 110 determines whether the user 5 has finished selecting (step S72). In other words, in this embodiment the user 5 can select a single mark range 42 on the digital photo 4 (so that the odor release device 2 releases a single odor) or multiple mark ranges 42 (so that the odor release device 2 mixes and releases multiple odors).
After the user 5 finishes selecting (for example, by pressing the release button described above), the release program 110 obtains the odor information added to the selected mark range 42 (step S74). Next, the release program 110 performs a face recognition action on the user 5 through the sensor 14 of the electronic device 1 to locate the nose of the user 5 (step S76), and then continuously determines whether the nose of the user 5 is close to the electronic device 1 (step S78). If the nose of the user 5 is not close to the electronic device 1 (for example, the distance from the electronic device 1 is not less than a threshold distance), the release program 110 defers subsequent actions and keeps detecting the nose of the user 5.
When the release program 110 determines that the nose of the user 5 is close to the electronic device 1 (for example, close to the display screen 13 and within the threshold distance of it), it then performs step S64 of FIG. 6 to transmit the odor information 121 added to the selected mark range 42 to the odor release device 2. Finally, step S66 of FIG. 6 is performed, i.e., the odor release device 2 generates and releases the corresponding odor according to the received odor information 121.
In the above embodiment, the release program 110 lets the user 5 first select a mark range 42 of interest on the digital photo 4, and controls the odor release device 2 to release the odor only when the nose of the user 5 approaches. This simulates the behavior of approaching a physical object and smelling it, making the present invention more engaging.
As shown in FIGS. 9A and 9B, in this embodiment the odor release device 2 may be, for example, a back cover of the electronic device 1 (for example, when the electronic device 1 is a smartphone), or may be integrated with the electronic device 1. In this embodiment, the location from which the odor release device 2 releases odor is quite close to the display screen 13 of the electronic device 1, so when the nose of the user 5 approaches the electronic device 1, the user 5 can clearly perceive the odor released by the odor release device 2.
In the above embodiment, the sensor 14 of the electronic device 1 may comprise, for example, a proximity sensor, a flood illuminator, a dot projector and an infrared lens. Specifically, after step S74, the electronic device 1 can control the proximity sensor to detect whether an object is approaching, and activate the flood illuminator when an object approaches. Once activated, the flood illuminator emits unstructured light to detect the contour of a face, and the electronic device 1 receives the reflected light through the infrared lens to obtain image information, from which it determines whether the object is a human face.
If the electronic device 1 determines that the object is not a human face, it takes no further action.
If the electronic device 1 determines that the object is a human face, it further controls the dot projector to project a plurality of infrared dots (for example, 30,000) onto the face, and receives the reflected infrared light through the infrared lens to obtain face information. The electronic device 1 can then identify the position of the nose of the user 5 from the face information, and determine whether the nose is close to the electronic device 1. The release program 110 performs the subsequent actions after determining that the nose of the user 5 is close to the electronic device 1 (and the distance from the electronic device 1 is less than the threshold distance).
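The gating logic of steps S76–S78 amounts to: release only when a face is detected and the nose position is within the threshold distance. A hypothetical sketch; the threshold value and the boolean/distance inputs (which a real implementation would obtain from the depth-sensing pipeline above) are invented for illustration:

```python
THRESHOLD_DISTANCE_CM = 8  # assumed threshold; the patent leaves it unspecified

def should_release(is_face, nose_distance_cm):
    """Steps S76-S78 in miniature: trigger the odor release only for a
    detected human face whose nose is closer than the threshold distance."""
    return is_face and nose_distance_cm < THRESHOLD_DISTANCE_CM

near = should_release(True, 5)    # face detected, nose close -> release
far = should_release(True, 20)    # face detected, nose far -> keep waiting
no_face = should_release(False, 3)  # object is not a face -> do nothing
```

In the patent's flow, a `False` result simply loops back to continued nose detection rather than aborting.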
Referring to FIGS. 10 and 11, FIG. 10 is a flowchart of a third embodiment of the odor release method of the present invention, and FIG. 11 illustrates the third embodiment. FIG. 10 further explains step S62 of FIG. 6.
As shown in FIG. 11, in this embodiment the odor release device 2 may be a virtual reality (VR) or augmented reality (AR) helmet or glasses, and the electronic device 1 is disposed inside the odor release device 2 or integrated with it.
In this embodiment, the user 5 cannot directly touch the display screen 13 of the electronic device 1 with a finger or another input interface. Therefore, after the electronic device 1 opens the digital photo 4 through the release program 110 (i.e., step S60 of FIG. 6), the release program 110 senses the gestures of the user 5 through the sensor 14 of the electronic device 1 (step S80) and, when a gesture of the user 5 is sensed, determines whether it corresponds to any mark range 42 on the digital photo 4 (step S82).
For example, when performing the aforementioned marking action, the user 5 can map different mark ranges 42 to different gestures (for example, scissors, rock and paper gestures made with the hand). After sensing a gesture of the user 5 in step S80, the release program 110 can determine whether the sensed gesture corresponds to any mark range 42.
As another example, because the odor release device 2 in this embodiment is a VR/AR helmet or glasses, the release program 110 can sense the position of the hand of the user 5 while the digital photo 4 is opened and displayed on the display screen 13, and determine whether the hand position corresponds to the display position of any mark range 42 on the digital photo 4 (for example, the user's index finger pointing at the display position corresponding to a mark range 42).
If the release program 110 determines in step S82 that the gesture of the user 5 does not correspond to any mark range 42, the process returns to step S80 to keep sensing the gestures of the user 5 through the sensor 14 of the electronic device 1. If the release program 110 determines that the gesture of the user 5 corresponds to a mark range 42, it further obtains the odor information 121 added to that mark range 42 (step S84). Step S64 of FIG. 6 is then executed, in which the release program 110 transmits the acquired odor information 121 to the odor release device 2, and finally step S66 of FIG. 6 is performed, i.e., the odor release device 2 generates and releases the corresponding odor according to the received odor information 121.
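The gesture dispatch of steps S80–S84 reduces to a mapping from recognized gestures to mark ranges. A minimal sketch; the gesture labels, range identifiers and example recipe are illustrative assumptions, and the gesture recognizer itself is assumed to exist upstream:

```python
# Hypothetical mapping established by the user during the marking action.
GESTURE_TO_RANGE = {"scissors": "range_1", "rock": "range_2", "paper": "range_3"}

def select_range_by_gesture(gesture, odor_by_range):
    """Steps S82-S84: map a sensed gesture to a mark range and return the
    odor info added to it, or None so the program keeps sensing (step S80)."""
    range_id = GESTURE_TO_RANGE.get(gesture)
    if range_id is None:
        return None
    return odor_by_range.get(range_id)

odor_by_range = {"range_1": {"citrus": 100}}
hit = select_range_by_gesture("scissors", odor_by_range)
miss = select_range_by_gesture("thumbs_up", odor_by_range)
```

A `None` result corresponds to the loop back to step S80 in FIG. 10.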
In another embodiment, the odor release device 2 (i.e., the VR/AR helmet or glasses) and the electronic device 1 can also implement the above odor release action through eye tracking and voice control.
Referring to FIG. 12, a flowchart of a fourth embodiment of the odor release method of the present invention. In this embodiment, after the electronic device 1 opens the digital photo 4 through the release program 110, the release program 110 can track the eye position of the user 5 through another sensor of the electronic device 1 (for example, a camera lens) (step S90), thereby determining which mark range 42 on the digital photo 4 the gaze of the user 5 is focused on.
If the user 5 confirms that the mark range 42 determined by the release program 110 is correct, the user can further issue a corresponding release instruction by voice, or make a corresponding gesture by hand. The release program 110 can sense and receive, through a sensor (for example, the sensor 14 described above), the release instruction issued by voice or the gesture made by the user 5 (step S92), and determine whether the release instruction or the gesture is correct (step S94).
If the release program 110 determines that the release instruction or the gesture is correct, it selects the mark range 42 on which the eyes of the user 5 are focused, and obtains the odor information 121 added to that mark range 42 (step S96). Finally, the release program 110 performs step S64 of FIG. 6 to transmit the acquired odor information 121 to the odor release device 2 and controls the odor release device 2 to release the corresponding odor.
Through the adding system and adding method of the present invention, a user can easily create digital photos with added odor information and share them with other users. Other users can display the digital photos through an electronic device and release the odors added to them through an odor release device, thereby obtaining a dual visual and olfactory experience.
The above is only a preferred embodiment of the present invention and does not limit the scope of the patent; all equivalent variations made using the content of the present invention are likewise included within the scope of the present invention.

Claims (19)

  1. An adding system for adding odor information to digital photos, comprising:
    a database storing multiple kinds of odor information; and
    a marking program installed and executed in a first electronic device, the first electronic device opening a digital photo through the marking program and locating a pixel on the digital photo that receives an external click;
    wherein the marking program searches outward from the pixel for a boundary to determine a mark range, obtains one of the multiple kinds of odor information from the database, then associates the mark range with the odor information, and stores or transmits the marked digital photo.
  2. The adding system for adding odor information to digital photos of claim 1, wherein after determining the mark range, the marking program performs image recognition on the mark range to identify the mark range as a marked object, and automatically searches the database for the odor information corresponding to the marked object.
  3. The adding system for adding odor information to digital photos of claim 1, further comprising an odor release device connected to the first electronic device, wherein after determining the mark range and obtaining the corresponding odor information, the marking program controls the odor release device to release the odor corresponding to the odor information.
  4. The adding system for adding odor information to digital photos of claim 1, further comprising a cloud database connected to the first electronic device through a network, wherein the marking program updates the multiple kinds of odor information in the database through the cloud database.
  5. The adding system for adding odor information to digital photos of claim 1, further comprising:
    an odor release device; and
    a release program installed and executed in a second electronic device, the second electronic device being connected to the odor release device and receiving and opening the marked digital photo through the release program;
    wherein the release program accepts an external operation to select the mark range on the digital photo and obtain the odor information added to the mark range, and transmits the odor information to the odor release device;
    wherein the odor release device generates and releases the corresponding odor according to the received odor information.
  6. The adding system for adding odor information to digital photos of claim 5, wherein the release program accepts the external operation to select multiple mark ranges on the digital photo and obtain the multiple kinds of odor information respectively added to the multiple mark ranges, and the odor release device mixes and releases multiple odors according to the multiple kinds of odor information.
  7. The adding system for adding odor information to digital photos of claim 5, wherein after selecting the mark range and obtaining the odor information added to the mark range, the release program performs a face recognition action through a sensor of the second electronic device and determines whether a user's nose is close to the second electronic device, and controls the odor release device to generate and release the corresponding odor according to the odor information upon determining that the user's nose is close to the second electronic device.
  8. The adding system for adding odor information to digital photos of claim 5, wherein the odor release device is a back cover of the second electronic device, or is integrated with the second electronic device.
  9. The adding system for adding odor information to digital photos of claim 5, wherein the odor release device is a VR/AR helmet or glasses, and the second electronic device is disposed on the odor release device.
  10. The adding system for adding odor information to digital photos of claim 5, wherein after opening the digital photo, the release program senses a user's gesture through a sensor of the second electronic device and selects the mark range on the digital photo according to the gesture.
  11. The adding system for adding odor information to digital photos of claim 5, wherein after opening the digital photo, the release program tracks a user's eye position through a sensor of the second electronic device, receives a release instruction issued by the user by voice or a gesture made by the user, and, upon determining that the release instruction or the gesture is correct, selects the mark range on the digital photo on which the user's eyes are focused.
  12. An adding method for adding odor information to digital photos, comprising:
    a) opening a digital photo by a marking program installed in a first electronic device;
    b) locating, by the marking program, a pixel on the digital photo that receives an external click;
    c) searching outward, by the marking program, from the pixel for a boundary to determine a mark range;
    d) obtaining, by the marking program, one of multiple kinds of odor information from a database, or receiving odor information formulated by the user;
    e) associating, by the marking program, the mark range with the obtained odor information; and
    f) storing or transmitting the marked digital photo.
  13. The adding method for adding odor information to digital photos of claim 12, further comprising the following step:
    c1) after step c), performing image recognition on the mark range by the marking program to identify the mark range as a marked object;
    wherein in step d), the marking program automatically searches the database for the odor information corresponding to the marked object.
  14. The adding method for adding odor information to digital photos of claim 12, wherein the first electronic device is connected to an odor release device, the method further comprising the following step:
    d1) after step d), controlling the odor release device by the marking program to release the odor corresponding to the odor information.
  15. The adding method for adding odor information to digital photos of claim 12, further comprising the following steps:
    g) receiving and opening the marked digital photo by a release program installed in a second electronic device, the second electronic device being connected to an odor release device;
    h) accepting, by the release program, an external operation to select the mark range on the digital photo and obtain the odor information added to the mark range;
    i) transmitting the odor information to the odor release device by the release program; and
    j) generating and releasing the corresponding odor by the odor release device according to the received odor information.
  16. The adding method for adding odor information to digital photos of claim 15, wherein in step h) the release program accepts the external operation to select multiple mark ranges on the digital photo and obtain the multiple kinds of odor information respectively added to the multiple mark ranges, in step i) the release program transmits the multiple kinds of odor information to the odor release device, and in step j) the odor release device mixes and releases multiple odors according to the multiple kinds of odor information.
  17. The adding method for adding odor information to digital photos of claim 15, wherein step h) comprises the following steps:
    h11) accepting the external operation by the release program to select the mark range on the digital photo;
    h12) obtaining the odor information added to the mark range;
    h13) performing a face recognition action through a sensor of the second electronic device to identify a user's nose;
    h14) determining whether the user's nose is close to the second electronic device; and
    h15) performing steps i) and j) upon determining that the user's nose is close to the second electronic device.
  18. The adding method for adding odor information to digital photos of claim 15, wherein step h) comprises the following steps:
    h21) sensing a user's gesture by the release program through a sensor of the second electronic device;
    h22) determining whether the user's gesture corresponds to any mark range on the digital photo; and
    h23) when the user's gesture corresponds to a mark range, selecting the corresponding mark range according to the gesture, obtaining the odor information added to the mark range, and performing steps i) and j).
  19. The adding method for adding odor information to digital photos of claim 15, wherein step h) comprises the following steps:
    h31) tracking a user's eye position by the release program through a sensor of the second electronic device;
    h32) receiving a release instruction issued by the user by voice or a gesture made by the user;
    h33) determining whether the release instruction or the gesture is correct; and
    h34) upon determining that the release instruction or the gesture is correct, selecting the mark range on which the user's eyes are focused, obtaining the odor information added to the mark range, and performing steps i) and j).
PCT/CN2018/106757 2018-09-20 2018-09-20 对数字相片添加气味资讯的添加系統及添加方法 WO2020056683A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2018/106757 WO2020056683A1 (zh) 2018-09-20 2018-09-20 对数字相片添加气味资讯的添加系統及添加方法
CN201880097844.9A CN112771471A (zh) 2018-09-20 2018-09-20 对数字相片添加气味资讯的添加系统及添加方法
EP18934032.6A EP3855287A4 (en) 2018-09-20 2018-09-20 ADDITION SYSTEM AND ADDITION PROCESS FOR ADDING SMELL INFORMATION TO DIGITAL PHOTOS
US17/205,829 US11914640B2 (en) 2018-09-20 2021-03-18 Adding system for adding scent information to digital photographs, and adding method for using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/106757 WO2020056683A1 (zh) 2018-09-20 2018-09-20 对数字相片添加气味资讯的添加系統及添加方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/205,829 Continuation US11914640B2 (en) 2018-09-20 2021-03-18 Adding system for adding scent information to digital photographs, and adding method for using the same

Publications (1)

Publication Number Publication Date
WO2020056683A1 true WO2020056683A1 (zh) 2020-03-26

Family

ID=69888168

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/106757 WO2020056683A1 (zh) 2018-09-20 2018-09-20 对数字相片添加气味资讯的添加系統及添加方法

Country Status (4)

Country Link
US (1) US11914640B2 (zh)
EP (1) EP3855287A4 (zh)
CN (1) CN112771471A (zh)
WO (1) WO2020056683A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111273971B (zh) * 2018-12-04 2022-07-29 腾讯科技(深圳)有限公司 视图中的信息处理方法、装置及存储介质
US11275453B1 (en) 2019-09-30 2022-03-15 Snap Inc. Smart ring for manipulating virtual objects displayed by a wearable device
US11798429B1 (en) 2020-05-04 2023-10-24 Snap Inc. Virtual tutorials for musical instruments with finger tracking in augmented reality
US11520399B2 (en) 2020-05-26 2022-12-06 Snap Inc. Interactive augmented reality experiences using positional tracking
US11925863B2 (en) * 2020-09-18 2024-03-12 Snap Inc. Tracking hand gestures for interactive game control in augmented reality
US11546505B2 (en) 2020-09-28 2023-01-03 Snap Inc. Touchless photo capture in response to detected hand gestures
US11740313B2 (en) 2020-12-30 2023-08-29 Snap Inc. Augmented reality precision tracking and display
US11531402B1 (en) 2021-02-25 2022-12-20 Snap Inc. Bimanual gestures for controlling virtual and graphical elements
EP4327185A1 (en) 2021-04-19 2024-02-28 Snap, Inc. Hand gestures for animating and controlling virtual and graphical elements
US11811964B1 (en) * 2022-07-19 2023-11-07 Snap Inc. Olfactory stickers for chat and AR-based messaging

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459768A (zh) * 2007-12-11 2009-06-17 华晶科技股份有限公司 设定气味的装置及其方法
CN105164614A (zh) * 2013-09-26 2015-12-16 Lg电子株式会社 数字装置及其控制方法
US20160232131A1 (en) * 2015-02-11 2016-08-11 Google Inc. Methods, systems, and media for producing sensory outputs correlated with relevant information
CN107563906A (zh) * 2017-08-27 2018-01-09 重庆大学 一种具备气味体验功能的自助点餐系统

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5949522A (en) * 1996-07-03 1999-09-07 Manne; Joseph S. Multimedia linked scent delivery system
US6024783A (en) * 1998-06-09 2000-02-15 International Business Machines Corporation Aroma sensory stimulation in multimedia
US6337925B1 (en) * 2000-05-08 2002-01-08 Adobe Systems Incorporated Method for determining a border in a complex scene with applications to image masking
US8224078B2 (en) * 2000-11-06 2012-07-17 Nant Holdings Ip, Llc Image capture and identification system and process
GB2378257A (en) * 2001-07-31 2003-02-05 Hewlett Packard Co Camera with scent and taste representation means
US7657100B2 (en) * 2005-05-09 2010-02-02 Like.Com System and method for enabling image recognition and searching of images
US8885977B2 (en) * 2009-04-30 2014-11-11 Apple Inc. Automatically extending a boundary for an image to fully divide the image
KR20160092299A (ko) * 2015-01-27 2016-08-04 한국전자통신연구원 영상의 발향 정보를 추출하는 영상 분석 장치 및 발향 추출 방법
US10195076B2 (en) * 2015-10-23 2019-02-05 Eye Labs, LLC Head-mounted device providing diagnosis and treatment and multisensory experience
US9912831B2 (en) * 2015-12-31 2018-03-06 International Business Machines Corporation Sensory and cognitive milieu in photographs and videos
CN107491461A (zh) * 2016-06-13 2017-12-19 中兴通讯股份有限公司 一种记录和释放气味的方法及移动终端
WO2018022648A1 (en) * 2016-07-25 2018-02-01 Iteris, Inc. Image-based field boundary detection and identification
US20190025773A1 (en) * 2017-11-28 2019-01-24 Intel Corporation Deep learning-based real-time detection and correction of compromised sensors in autonomous machines

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459768A (zh) * 2007-12-11 2009-06-17 华晶科技股份有限公司 设定气味的装置及其方法
CN105164614A (zh) * 2013-09-26 2015-12-16 Lg电子株式会社 数字装置及其控制方法
US20160232131A1 (en) * 2015-02-11 2016-08-11 Google Inc. Methods, systems, and media for producing sensory outputs correlated with relevant information
CN107563906A (zh) * 2017-08-27 2018-01-09 重庆大学 一种具备气味体验功能的自助点餐系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3855287A4 *

Also Published As

Publication number Publication date
EP3855287A4 (en) 2022-04-20
CN112771471A (zh) 2021-05-07
US20210209153A1 (en) 2021-07-08
EP3855287A1 (en) 2021-07-28
US11914640B2 (en) 2024-02-27

Similar Documents

Publication Publication Date Title
WO2020056683A1 (zh) 对数字相片添加气味资讯的添加系統及添加方法
CN105975560B (zh) 一种智能设备的题目搜索方法和装置
CN105320428B (zh) 用于提供图像的方法和设备
CN106462283B (zh) 计算设备上的字符识别
CN104202865B (zh) 一种智能照明装置
CN105264480B (zh) 用于在相机界面之间进行切换的设备、方法和图形用户界面
WO2017118075A1 (zh) 人机交互系统、方法及装置
KR102285699B1 (ko) 이미지를 디스플레이하는 사용자 단말기 및 이의 이미지 디스플레이 방법
WO2017096509A1 (zh) 一种显示、处理的方法及相关装置
CN105955641A (zh) 用于与对象交互的设备、方法和图形用户界面
CN109167871A (zh) 用于管理可控外部设备的用户界面
CN107637073A (zh) 视频记录和回放
WO2014000645A1 (zh) 一种基于图片的交互方法、装置和服务器
CN105144057A (zh) 用于根据具有模拟三维特征的控制图标的外观变化来移动光标的设备、方法和图形用户界面
CN109240582A (zh) 一种点读控制方法及智能设备
US20220207872A1 (en) Apparatus and method for processing prompt information
CN105260360B (zh) 命名实体的识别方法及装置
CN107710131A (zh) 内容浏览用户界面
JP2021108162A (ja) 映像検索情報提供方法、装置およびコンピュータプログラム
CN105512220B (zh) 图像页面输出方法及装置
TW201516755A (zh) 電子文件標記方法及裝置
CN108287903A (zh) 一种与投影相结合的搜题方法及智能笔
JP2015060579A (ja) 情報処理システム、情報処理方法および情報処理プログラム
CN104182780A (zh) 一种自动生成就餐点评的方法及终端设备
CN104221055A (zh) 图像处理装置、图像处理方法和记录介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18934032

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018934032

Country of ref document: EP

Effective date: 20210420