CN107025044B - Timing method and device - Google Patents


Info

Publication number
CN107025044B
CN107025044B (application CN201710203813.4A)
Authority
CN
China
Prior art keywords
scene
preset
image
timing data
timing
Prior art date
Legal status
Active
Application number
CN201710203813.4A
Other languages
Chinese (zh)
Other versions
CN107025044A (en)
Inventor
Lin Liang (林亮)
Current Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201710203813.4A priority Critical patent/CN107025044B/en
Publication of CN107025044A publication Critical patent/CN107025044A/en
Application granted granted Critical
Publication of CN107025044B publication Critical patent/CN107025044B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention discloses a timing method and a corresponding device. The timing method comprises the following steps: acquiring a scene image of a target object, extracting the scene image features carried by the scene image, and determining the application scene of the target object based on those features; when the scene image features match the preset image features of a preset scene image, acquiring timing data corresponding to the application scene; and outputting prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached. With the method and the device, the application scene in which the target object is located can be identified using augmented reality technology, and the timing data corresponding to that scene can then be obtained to implement automatic timing, which reduces the operational complexity of setting the timing function on a terminal device.

Description

Timing method and device
Technical Field
The invention relates to the technical field of augmented reality, in particular to a timing method and a device thereof.
Background
With the continuous development of science and technology, terminal devices such as smartphones, wearable devices and tablet computers have become an indispensable part of daily life. Augmented reality technology is gradually being integrated into these devices, adding lifelike intelligent elements to applications such as navigation, entertainment, photography and communication. In the prior art, terminal devices provide a timing function: a user can manually set an alarm clock to obtain a timed reminder. However, on occasions where manually setting an alarm is inconvenient (such as while running or cooking), this increases the operational complexity of using the device's timing function.
Disclosure of Invention
In view of this, embodiments of the present invention provide a timing method and device that can identify the application scene in which a target object is located using augmented reality technology and then obtain the timing data corresponding to that scene to implement automatic timing, thereby reducing the operational complexity of setting the timing function on a terminal device.
In order to solve the above technical problem, an embodiment of the present invention provides a timing method, where the method includes:
acquiring a scene image of a target object, extracting scene image characteristics carried by the scene image, and determining an application scene of the target object based on the scene image characteristics;
when the scene image characteristics are matched with preset image characteristics of a preset scene image, acquiring timing data corresponding to the application scene;
and outputting prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached.
Correspondingly, an embodiment of the present invention further provides a timing device, where the timing device includes:
the application scene determining unit is used for acquiring a scene image where a target object is located, extracting scene image characteristics carried by the scene image, and determining an application scene where the target object is located based on the scene image characteristics;
the timing data acquisition unit is used for acquiring timing data corresponding to the application scene when the scene image characteristics are matched with preset image characteristics of a preset scene image;
and the prompt information output unit is used for outputting the prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached.
In the embodiment of the invention, a scene image of the target object is acquired, the scene image features carried by the image are extracted, and the application scene of the target object is determined based on those features; when the scene image features match the preset image features of a preset scene image, the timing data corresponding to the application scene is acquired; and when the prompt time point indicated by the timing data is reached, the prompt information corresponding to the application scene is output. The application scene is thus identified using augmented reality technology and the corresponding timing data obtained, so that the terminal device's timing function is realized automatically and the operational complexity of setting it is reduced.
Drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a timing method according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of another timing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a timing device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another timing device provided in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a timing data acquisition unit according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a prompt information output unit according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another timing device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort fall within the scope of the present invention.
The timing method provided by the embodiment of the invention can be applied wherever a terminal device implements an automatic timing function by means of augmented reality technology (for example, after a cooking scene is recognized, automatically timing 5 minutes before prompting the user to add water). Specifically: a scene image of the target object is acquired, the scene image features carried by the image are extracted, and the application scene of the target object is determined based on those features; when the scene image features match the preset image features of a preset scene image, the timing data corresponding to the application scene is acquired; and when the prompt time point indicated by the timing data is reached, the prompt information corresponding to the application scene is output. The application scene is thus identified using augmented reality technology and the corresponding timing data obtained, so that the terminal device's timing function is realized automatically and the operational complexity of setting it is reduced.
The timing device in the embodiment of the invention may be any terminal device that supports augmented reality technology and a timing function, such as a tablet computer, a smartphone, a wearable device, or a Mobile Internet Device (MID).
The timing method provided by the embodiment of the present invention will be described in detail with reference to fig. 1 and 2.
Fig. 1 is a schematic flowchart of a timing method according to an embodiment of the present invention. As shown in fig. 1, the method described in the embodiment of the present invention may include the following steps S101 to S103.
S101, obtaining a scene image where a target object is located, extracting scene image features carried by the scene image, and determining an application scene where the target object is located based on the scene image features.
Specifically, the timing device may acquire a scene image of a target object. The target object may be a person (with vital signs) or an inanimate object, and the scene image in which the target object is located may include at least one scene identifier from which the application scene can be determined. For example, to determine that the user is in a rice-cooking scene, the scene identifiers should include at least a rice cooker, food materials, and a heating device (a gas stove or an induction cooker).
Further, the timing device may extract the scene image features carried by the scene image. A scene image feature may be the outer contour, color marking, or position information of a scene identifier. For example, the scene image features may include the outline of a rice cooker, the relative position of the cooker and the heating device, and the shape and color of the food materials.
Further, the timing device may determine the application scene of the target object based on the scene image features. For example, it may determine that the target object is in a cooking scene (e.g. stewing spareribs) based on the outline of a rice cooker, the relative position of the cooker and the heating device, and the shape and color of the food materials. Since there is at least one scene image feature, the timing device may determine at least one candidate application scene based on any combination of those features. For example, it may infer a cookware-purchasing scene from the outline of a rice cooker alone, a pot-washing scene from the outline of the rice cooker together with the shape of a faucet, or a cooking scene from the outline of the rice cooker, the shape of the faucet, and the shape of the food materials.
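The scene-determination step described above can be sketched in code. This is a minimal illustration, not the patent's implementation: the scene names, feature names, and the rule that a scene is a candidate when all of its defining features appear in the image are assumptions.

```python
# Hypothetical sketch of step S101: a scene is a candidate when all of
# its defining features are found among the extracted image features.
# Scene and feature names are illustrative only.

SCENE_FEATURES = {
    "buying cookware": {"rice_cooker_outline"},
    "washing the pot": {"rice_cooker_outline", "faucet_shape"},
    "cooking": {"rice_cooker_outline", "faucet_shape", "food_material_shape"},
}

def candidate_scenes(extracted_features):
    """Return every scene whose defining features all appear in the image."""
    found = set(extracted_features)
    return [scene for scene, needed in SCENE_FEATURES.items() if needed <= found]

print(candidate_scenes({"rice_cooker_outline", "faucet_shape"}))
# → ['buying cookware', 'washing the pot']
```

With more features extracted, more specific scenes become candidates, mirroring the rice cooker / faucet / food-material example above.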
S102, when the scene image characteristics are matched with the preset image characteristics of the preset scene image, acquiring timing data corresponding to the application scene.
Specifically, when the scene image features match the preset image features carried by a preset scene image, the timing device may obtain the timing data corresponding to the application scene. The preset image features may cover part or all of the scene image features. When the similarity between the scene image features and the preset image features of a preset scene image is the highest among all preset scene images, the features are considered matched, and a single application scene can then be determined uniquely from the at least one candidate scene, namely the one corresponding most closely to the scene image features.
Further, once the features match, that is, once the application scene corresponding most closely to the scene image has been determined, the timing device may obtain the timing data for that scene. For example, when the timing device determines that the application scene is a sparerib-stewing scene, it may acquire timing data of 10 minutes. The timing data may be understood as a time period whose starting point is the current time.
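The maximum-similarity matching and timing lookup can be sketched as follows. Jaccard similarity over feature sets is an assumption here; the patent only requires picking the preset scene with the highest similarity. Scene names and timing values are illustrative.

```python
# Illustrative sketch of step S102: choose the preset scene whose
# preset features are most similar to the extracted features, then
# look up its timing data (in minutes).

PRESET_SCENES = {
    "stewing spareribs": ({"rice_cooker_outline", "sparerib_shape"}, 10),
    "cooking rice": ({"rice_cooker_outline", "rice_shape"}, 30),
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def timing_for(extracted):
    """Return (best-matching scene, its timing data in minutes)."""
    best = max(PRESET_SCENES,
               key=lambda s: jaccard(extracted, PRESET_SCENES[s][0]))
    return best, PRESET_SCENES[best][1]

print(timing_for({"rice_cooker_outline", "sparerib_shape"}))
# → ('stewing spareribs', 10)
```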
And S103, outputting prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached.
Specifically, when the timing data is a time period whose starting point is the current time, the end of that period is the prompt time point. When the prompt time point indicated by the timing data is reached, the timing device may output the prompt information corresponding to the application scene. For example, if the timing data is acquired at 11:00 and the application scene is a sparerib-stewing scene with timing data of 10 minutes, the timing device may output a prompt at 11:10 to add water to the spareribs. The prompt may be output as a preset voice message, preset music, or a preset video.
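The scheduling in step S103 can be sketched with a simple timer. The message text and the print callback are illustrative assumptions; a real device would play the preset voice, music, or video instead.

```python
# Minimal sketch of step S103: schedule the scene-specific prompt for
# the prompt time point (current time plus the timing data).
import threading

def schedule_prompt(minutes, message, fire=print):
    """Fire `message` after `minutes` minutes; returns the timer handle."""
    timer = threading.Timer(minutes * 60, fire, args=(message,))
    timer.start()
    return timer

t = schedule_prompt(10, "Add water to the spareribs")
t.cancel()  # cancelled immediately here so the sketch exits at once
```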
In the embodiment of the invention, a scene image of the target object is acquired, the scene image features carried by the image are extracted, and the application scene of the target object is determined based on those features; when the scene image features match the preset image features of a preset scene image, the timing data corresponding to the application scene is acquired; and when the prompt time point indicated by the timing data is reached, the prompt information corresponding to the application scene is output. The application scene is thus identified using augmented reality technology and the corresponding timing data obtained, so that the terminal device's timing function is realized automatically and the operational complexity of setting it is reduced.
Referring to fig. 2, a flow chart of another timing method according to an embodiment of the invention is shown. As shown in fig. 2, the timing method in the present embodiment may include steps S201 to S209.
S201, obtaining a scene image where a target object is located, extracting scene image features carried by the scene image, and determining an application scene where the target object is located based on the scene image features.
Specifically, the timing device may acquire a scene image of a target object. The target object may be a person (with vital signs) or an inanimate object, and the scene image may include at least one scene identifier from which the application scene can be determined. For example, to determine that the user is in a rice-cooking scene, the scene image should include at least scene identifiers such as a rice cooker, food materials, and a heating device (a gas stove or an induction cooker).
Further, the timing device may extract the scene image features carried by the scene image. A scene image feature may be the outer contour, color marking, or position information of a scene identifier. For example, the scene image features may include the outline of a rice cooker, the relative position of the cooker and the heating device, and the shape and color of the food materials.
Further, the timing device may determine the application scene of the target object based on the scene image features. For example, it may determine that the user is in a cooking scene (e.g. stewing spareribs) based on the outline of the rice cooker, the relative position of the cooker and the heating device, and the shape and color of the food materials. Since the scene image features include at least one feature, the timing device may determine at least one candidate application scene based on any combination of those features: a cookware-purchasing scene from the outline of a rice cooker alone, a pot-washing scene from the outline of the rice cooker together with the shape of a faucet, or a cooking scene from the outline of the rice cooker, the shape of the faucet, and the shape of the food materials.
S202, when the scene image characteristics are matched with the preset image characteristics of the preset scene image, a preset scene list corresponding to the scene image is obtained.
Specifically, when the scene image features match the preset image features of a preset scene image, the timing device may obtain a preset scene list corresponding to the scene image. There may be at least one scene image feature, and a preset scene image carries at least one preset image feature. Whenever any one or more of the preset image features match any one or more of the scene image features, the timing device may take the preset application scene corresponding to that preset scene image as a candidate.
Since there may be more than one preset scene image whose preset features match the scene image features, there may also be more than one corresponding preset application scene. The timing device may gather these preset application scenes into a preset scene list. For example, if the scene image features are the outline of a rice cooker, the relative position of the cooker and the heating device, the shape of a faucet, and the shape and color of the food materials, the preset scene list may include a cookware-purchasing scene, a pot-washing scene, a cooking scene, and so on.
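The "any one or more features match" rule for building the preset scene list can be sketched as a partial-overlap test. Scene and feature names are again illustrative assumptions.

```python
# Sketch of step S202: a preset scene joins the candidate list as soon
# as any of its preset features matches any extracted feature.

PRESET_IMAGES = {
    "buying cookware": {"rice_cooker_outline"},
    "washing the pot": {"rice_cooker_outline", "faucet_shape"},
    "cooking": {"rice_cooker_outline", "faucet_shape", "food_material_shape"},
}

def preset_scene_list(extracted):
    """Return every preset scene sharing at least one feature with the image."""
    found = set(extracted)
    return [name for name, feats in PRESET_IMAGES.items() if feats & found]

print(preset_scene_list({"faucet_shape"}))
# → ['washing the pot', 'cooking']
```

Unlike the unique-match rule of S102, this deliberately over-generates candidates; the user then disambiguates in step S203.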
S203, acquiring the target application scene in the selected preset scene list, and acquiring timing data corresponding to the target application scene.
Specifically, the timing device may obtain the target application scene selected from the preset scene list; the target application scene is the scene of the current target object as chosen by the user.
Further, after the target application scene is obtained, the timing device may obtain the timing data corresponding to it. For example, when the target application scene is a sparerib-stewing scene, timing data of 10 minutes may be acquired; again, the timing data may be a time period whose starting point is the current time.
Letting the user select the target application scene from the preset scene list improves the accuracy of scene identification and thus reduces the probability of acquiring the wrong timing data because of a scene-selection error.
And S204, acquiring time data set based on a preset setting mode, and updating the timing data based on the time data to obtain updated timing data corresponding to the application scene.
Specifically, after acquiring the timing data, the timing device may acquire time data set by the user in a preset setting mode. For example, the device may read time data the user sets through body control (for instance, blinking the left eye to increase the time and the right eye to decrease it, and blinking both eyes to confirm; or using voice commands to increase, decrease, and confirm the time). Further, the timing device may update the timing data based on this time data to obtain updated timing data for the application scene. For example, when the timing data is 10 minutes but the user wants to add a new food material at 11:06, i.e. 6 minutes after the current time point (11:00) at which the timing data was acquired, the user reduces the time by 4 minutes to obtain updated timing data of 6 minutes.
In the embodiment of the invention, allowing the user to set the timing data manually increases the timing flexibility of the timing device.
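The update in step S204 reduces to simple arithmetic on the timing data. The input channel (blink or voice control, per the description) is out of scope here; only the update itself is shown, and the positive-value check is an added assumption.

```python
# Sketch of step S204: apply the user's adjustment to the scene's
# timing data. Adjustments may be negative (shorten the timer).

def update_timing(timing_minutes, adjustment_minutes):
    """Return updated timing data; raises if the result is not positive."""
    updated = timing_minutes + adjustment_minutes
    if updated <= 0:
        raise ValueError("updated timing data must be positive")
    return updated

print(update_timing(10, -4))  # the 'reduce by 4 minutes' example → 6
```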
S205, monitoring the behavior image of the target object in the application scene, and extracting the behavior image characteristics of the behavior image.
Specifically, the timing device may monitor a behavior image of the target object in the application scene; the behavior image is an image captured while the target object performs actions in the scene.
Further, the timing device may extract the behavior image features of the behavior image. While extracting these features, the timing device may determine the behavior state of the target object within the time period indicated by the timing data. Alternatively, the timing device may defer processing: shortly before the prompt time point it may process all the behavior image data acquired so far in one batch, and then process the remaining short interval as it arrives. For example, at 11:09 it may start processing the behavior image data acquired between 11:00 and 11:09, while processing the data from 11:09 to 11:10 as it is captured.
S206, when the target time point indicated by the timing data is reached, monitoring the behavior image of the target object in the application scene is started, and the behavior image feature of the behavior image is extracted.
Specifically, when the target time point indicated by the timing data is reached, the timing device may begin to monitor the behavior image of the target object in the application scene and extract its behavior image features. The target time point is a time point before the prompt time point, separated from it by a preset time period. For example, if the timing data is acquired at 11:00 and is 10 minutes, the prompt time point is 11:10, and 11:07, three minutes before the prompt time point, may be set as the target time point. The preset interval between the target time point and the prompt time point may be configured for the actual application scene; before the target time point is reached, the timing device does not monitor the target object's behavior.
Further, when the target time point is reached, the timing device may start to monitor a behavior image of the target object in the application scene, and extract a behavior image feature of the behavior image.
In the embodiment of the present invention, because the behavior image is acquired only in the preset time period between the target time point and the prompt time point, the timing device does not need to monitor continuously, which reduces its power consumption.
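The monitoring window of step S206 can be sketched as follows; the 3-minute lead period comes from the text's example, and the specific times are illustrative.

```python
# Sketch of step S206: derive the monitoring window. Behavior
# monitoring starts only at the target time point (the prompt time
# point minus a preset lead period), which is the power-saving
# behaviour described above.
from datetime import datetime, timedelta

def monitoring_window(start, timing_minutes, lead_minutes):
    """Return (target time point, prompt time point)."""
    prompt_time = start + timedelta(minutes=timing_minutes)
    target_time = prompt_time - timedelta(minutes=lead_minutes)
    return target_time, prompt_time

target, prompt = monitoring_window(datetime(2017, 1, 1, 11, 0), 10, 3)
print(target.time(), prompt.time())  # → 11:07:00 11:10:00
```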
S207, determining the behavior state of the target object in the time period indicated by the timing data based on the behavior image characteristics.
Specifically, the timing device may determine the behavior state of the target object in the time period indicated by the timing data based on the behavior image features. For example, from the behavior image features acquired in the period indicated by the timing data (say 11:00 to 11:10), it may determine whether the target object added water to the food being cooked or took no action.
And S208, when the behavior state is not matched with the preset behavior state indicated by the preset behavior image and the prompt time point indicated by the timing data is reached, outputting prompt information aiming at the application scene.
When the timing data is a time period starting from the current time point, the end of that period is the prompt time point. Specifically, when the behavior state does not match the preset behavior state indicated by a preset behavior image and the prompt time point indicated by the timing data is reached, the timing device may output prompt information for the application scene. For example, if the preset behavior state is adding water to the food being cooked, and no action is observed in the period indicated by the timing data, the timing device outputs a water-adding prompt for the cooking scene when the prompt time point is reached. The prompt may be a voice message, or an animation with music, reminding the user to add water.
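Steps S207 and S208 amount to classifying the behavior state and comparing it against the preset state. The feature and state names below are assumptions; the patent does not specify how the state is classified.

```python
# Sketch of steps S207/S208: classify the behavior state from the
# extracted behavior image features, then prompt only when it differs
# from the preset state for the scene.

def behavior_state(features):
    """Toy classifier: did a water stream appear in the behavior images?"""
    return "adding water" if "water_stream_shape" in features else "no action"

def should_prompt(features, preset_state="adding water"):
    """True when the expected action was not observed."""
    return behavior_state(features) != preset_state

print(should_prompt(set()))                   # → True (no action observed)
print(should_prompt({"water_stream_shape"}))  # → False (water was added)
```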
S209, replacing and saving the timing data corresponding to the application scene as the update timing data.
Specifically, when the timing device has acquired time data set by the user in the preset setting mode during timing and has updated the timing data accordingly, it may replace the stored timing data for the application scene with the updated timing data. After this replacement, the next time the target object is in the same application scene, the timing data acquired for that scene is the updated value. For example, if the timing data stored for the sparerib-stewing scene is 10 minutes but the user updates it to 15 minutes according to personal preference and saves the replacement, then the next time the sparerib-stewing scene is encountered, the timing device acquires timing data of 15 minutes.
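The replace-and-save behavior of step S209 can be sketched with an in-memory store standing in for the device's persistent storage; the store structure and values are assumptions.

```python
# Sketch of step S209: replace-and-save the updated timing data so the
# next occurrence of the same scene uses the new value.

timing_store = {"stewing spareribs": 10}

def save_updated_timing(store, scene, updated_minutes):
    """Overwrite the scene's stored timing data and return the new value."""
    store[scene] = updated_minutes  # replace-and-save
    return store[scene]

save_updated_timing(timing_store, "stewing spareribs", 15)
print(timing_store["stewing spareribs"])  # → 15
```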
In the embodiment of the invention, the scene image where the target object is located is acquired, the scene image features carried by the scene image are extracted, the application scene where the target object is located is determined based on the scene image features, the timing data corresponding to the application scene is acquired when the scene image features match the preset image features of a preset scene image, and the prompt information corresponding to the application scene is output when the prompt time point indicated by the timing data is reached. Acquiring the application scene where the target object is located based on an augmented reality technology and acquiring the timing data corresponding to that scene automatically realizes the timing function of the terminal device and reduces the operation complexity of setting the timing function. Acquiring the target application scene from the selected preset scene list increases the accuracy of acquiring the application scene and reduces the probability of acquiring wrong timing data due to an application scene selection error. The manual setting of the timing data increases the timing diversity of the timing device. Acquiring behavior images only within the preset time period between the target time point and the prompt time point reduces the power consumption of the timing device.
The timing device provided by the embodiment of the present invention will be described in detail below with reference to fig. 3 to 6. It should be noted that the devices shown in fig. 3 to 6 are used to execute the methods of the embodiments of the present invention shown in fig. 1 and 2. For convenience of description, only the parts related to the embodiment of the present invention are shown; for undisclosed technical details, please refer to the embodiments of the present invention shown in fig. 1 and 2.
Fig. 3 is a schematic structural diagram of a timing device according to an embodiment of the present invention. As shown in fig. 3, the timing device 1 according to the embodiment of the present invention may include: an application scene determining unit 11, a timing data acquiring unit 12, and a prompt information outputting unit 13.
The application scene determining unit 11 is configured to acquire a scene image where a target object is located, extract scene image features carried in the scene image, and determine an application scene where the target object is located based on the scene image features.
In a specific implementation, the application scene determining unit 11 may acquire a scene image in which a target object is located. It may be understood that the target object may be a person with vital signs or an object without vital signs, and the scene image in which the target object is located may include at least one scene identifier capable of determining the application scene in which the target object is located. For example, to determine the application scene in which the user is located (e.g., a rice-cooking scene), the scene identifiers at least include: a rice cooker, food materials, a heating device (a gas stove or an induction cooker), and the like.
Further, the application scene determining unit 11 may extract the scene image features carried by the scene image. It is understood that the scene image features may be the outer contour, color identifier, or position information of the scene identifier. For example, the scene image features may include the outline of the rice cooker, the relative position of the cooker and the heating device, the shape and color of the food materials, and the like.
Further, the application scene determining unit 11 may determine the application scene in which the target object is located based on the scene image features. For example, the application scene determining unit 11 may determine that the application scene in which the target object is located is food cooking (e.g., stewing spareribs) based on the outline of the rice cooker, the relative position of the cooker and the heating device, the shape and color of the food materials, and the like. It is to be understood that, since the scene image features include at least one image feature, the application scene determining unit 11 may determine at least one application scene in which the target object may be located based on a combination of any one or more of the at least one image feature. For example, it may determine a kitchenware purchasing application scene based on the contour of the rice cooker, a pot brushing application scene based on the contour of the rice cooker and the shape of the water tap, or a food cooking application scene based on the contour of the rice cooker, the shape of the water tap, and the shape of the food materials.
A timing data obtaining unit 12, configured to obtain timing data corresponding to the application scene when the scene image feature matches a preset image feature of a preset scene image.
In a specific implementation, when the scene image features match preset image features carried by a preset scene image, the timing data obtaining unit 12 may obtain the timing data corresponding to the application scene. It is to be understood that the preset scene image features may include part or all of the scene image features. When the similarity between the scene image features and the preset scene image features corresponding to a preset scene image is the highest, it may be determined that the scene image features match those preset scene image features, and an application scene may then be uniquely determined from the at least one application scene, namely the one corresponding most closely to the scene image features.
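As a hedged illustration of the highest-similarity matching rule, the following Python sketch uses sets of named features to stand in for real image features, and Jaccard overlap to stand in for whatever image-similarity measure an actual implementation would use; all scene and feature names are illustrative, not from the patent.

```python
# Sketch of the matching rule: among the preset scene images, the one with
# the highest similarity to the extracted scene image features uniquely
# determines the application scene. Jaccard overlap is a stand-in metric.
def jaccard(a: set, b: set) -> float:
    """Feature-overlap similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

PRESET_SCENES = {
    "buy_kitchenware": {"rice_cooker_contour"},
    "brush_pot": {"rice_cooker_contour", "tap_shape"},
    "cook_food": {"rice_cooker_contour", "tap_shape",
                  "food_shape", "food_color"},
}

def best_matching_scene(extracted: set) -> str:
    """Uniquely determine the application scene by highest similarity."""
    return max(PRESET_SCENES,
               key=lambda s: jaccard(extracted, PRESET_SCENES[s]))
```

With all four kitchen features extracted, the cooking scene wins; with only the rice cooker contour, the kitchenware-purchasing scene is the closest match, mirroring the examples in the text.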
Further, when the scene image features match the preset image features of a preset scene image, that is, after the application scene corresponding most closely to the scene image has been determined, the timing data obtaining unit 12 may obtain the timing data corresponding to the application scene. For example, when the timing data obtaining unit 12 determines that the application scene is a spareribs-stewing scene, it may obtain timing data of 10 minutes. It is understood that the timing data may be a time period taking the current time point as the start timing point.
And the prompt information output unit 13 is configured to output prompt information corresponding to the application scenario when the prompt time point indicated by the timing data is reached.
In a specific implementation, when the timing data is a time period taking the current time point as the start timing point, the time point at which the time period ends may be the prompt time point. It is to be understood that, when the prompt time point indicated by the timing data is reached, the prompt information output unit 13 may output the prompt information corresponding to the application scene. For example, if the current time point of acquiring the timing data is 11:00, the application scene is a spareribs-stewing scene, and the timing data is 10 minutes, the prompt information output unit 13 may output prompt information for adding water to the spareribs at 11:10. It is understood that the prompt information output unit 13 may output the prompt information in the form of a preset voice, a preset piece of music, a preset video, or the like.
In the embodiment of the invention, the scene image where the target object is located is acquired, the scene image features carried by the scene image are extracted, the application scene where the target object is located is determined based on the scene image features, the timing data corresponding to the application scene is acquired when the scene image features match the preset image features of a preset scene image, and the prompt information corresponding to the application scene is output when the prompt time point indicated by the timing data is reached. Acquiring the application scene where the target object is located based on an augmented reality technology and acquiring the timing data corresponding to that scene automatically realizes the timing function of the terminal device and reduces the operation complexity of setting the timing function of the terminal device.
Referring to fig. 4, a schematic structural diagram of another timing device is provided in the embodiment of the present invention. As shown in fig. 4, the timing device 1 according to the embodiment of the present invention may include: an application scene determining unit 11, a timing data obtaining unit 12, a prompt information output unit 13, a data updating unit 14, and a data saving unit 15.
The application scene determining unit 11 is configured to acquire a scene image where a target object is located, extract scene image features carried in the scene image, and determine an application scene where the target object is located based on the scene image features.
In a specific implementation, the application scene determining unit 11 may acquire a scene image in which a target object is located. It may be understood that the target object may be a person with vital signs or an object without vital signs, and the scene image in which the target object is located may include at least one scene identifier capable of determining the application scene in which the target object is located. For example, to determine the application scene in which the user is located (e.g., a rice-cooking scene), the scene identifiers at least include: a rice cooker, food materials, a heating device (a gas stove or an induction cooker), and the like.
Further, the application scene determining unit 11 may extract the scene image features carried by the scene image. It is understood that the scene image features may be the outer contour, color identifier, or position information of the scene identifier. For example, the scene image features may include the outline of the rice cooker, the relative position of the cooker and the heating device, the shape and color of the food materials, and the like.
Further, the application scene determining unit 11 may determine the application scene in which the target object is located based on the scene image features. For example, the application scene determining unit 11 may determine that the application scene in which the target object is located is food cooking (e.g., stewing spareribs) based on the outline of the rice cooker, the relative position of the cooker and the heating device, the shape and color of the food materials, and the like. It is to be understood that, since the scene image features include at least one image feature, the application scene determining unit 11 may determine at least one application scene in which the target object may be located based on a combination of any one or more of the at least one image feature. For example, it may determine a kitchenware purchasing application scene based on the contour of the rice cooker, a pot brushing application scene based on the contour of the rice cooker and the shape of the water tap, or a food cooking application scene based on the contour of the rice cooker, the shape of the water tap, and the shape of the food materials.
A timing data obtaining unit 12, configured to obtain timing data corresponding to the application scene when the scene image feature matches a preset image feature of a preset scene image.
In a specific implementation, when the scene image feature matches a preset image feature carried by a preset scene image, the timing data obtaining unit 12 may obtain timing data corresponding to the application scene.
Referring to fig. 5, a schematic structural diagram of the timing data obtaining unit 12 is provided for the embodiment of the present invention. As shown in fig. 5, the timing data acquisition unit 12 may include:
the list data subunit 121 is configured to, when the scene image feature matches a preset image feature of a preset scene image, obtain a preset scene list corresponding to the scene image.
In a specific implementation, when the scene image feature matches a preset image feature of a preset scene image, the list data subunit 121 may obtain a preset scene list corresponding to the scene image. It is to be understood that the scene image feature may include at least one scene image feature, and the preset scene image feature is at least one preset image feature carried by the preset scene image. It is understood that the list data subunit 121 may acquire the preset application scene corresponding to the preset scene image as long as any one or more preset features of the at least one preset image feature are matched with any one or more image features of the at least one scene image feature.
It can be understood that, since there may be at least one preset scene image carrying preset image features that match the scene image features, there may also be at least one preset application scene corresponding to the at least one preset scene image. Further, the list data subunit 121 may obtain the at least one preset application scene to form a preset scene list. For example, when the scene image features are the outline of the rice cooker, the relative position between the cooker and the heating device, the shape of the water tap, and the shape and color of the food materials, the list data subunit 121 may obtain a preset scene list composed of the preset application scenes corresponding to preset scene images that include any one or more of these features; the preset scene list may include a kitchenware purchasing scene, a pot brushing scene, a food cooking scene, and the like.
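The any-feature-matches rule of the list data subunit can be sketched as follows; the feature and scene names are hypothetical placeholders for illustration only, and real implementations would match image features rather than strings.

```python
# Hypothetical sketch of the list data subunit's rule: a preset application
# scene enters the candidate list as soon as ANY of its preset image
# features matches any extracted scene image feature.
PRESET_SCENE_FEATURES = {
    "buy_kitchenware": {"rice_cooker_contour"},
    "brush_pot": {"rice_cooker_contour", "tap_shape"},
    "cook_food": {"rice_cooker_contour", "tap_shape", "food_shape"},
}

def preset_scene_list(extracted: set) -> list:
    """Build the preset scene list from the extracted image features."""
    return sorted(scene for scene, feats in PRESET_SCENE_FEATURES.items()
                  if feats & extracted)  # non-empty intersection = match
```

A single extracted feature such as the rice cooker contour already yields all three candidate scenes, which is why the user is then asked to select the target application scene from the list.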
A timing data obtaining subunit 122, configured to obtain a target application scene in the selected preset scene list, and obtain timing data corresponding to the target application scene.
In a specific implementation, the timing data obtaining subunit 122 may obtain a target application scene in the selected preset scene list, where the target application scene may be an application scene of the current target object selected by the user.
Further, after acquiring the target application scene, the timing data obtaining subunit 122 may obtain the timing data corresponding to the target application scene. For example, when the target application scene acquired by the timing data obtaining subunit 122 is a spareribs-stewing scene, timing data of 10 minutes may be obtained. It is understood that the timing data may be a time period taking the current time point as the start timing point.
It can be understood that by acquiring the target application scene in the preset scene list, the accuracy of acquiring the application scene is increased, and further, the error probability of acquiring the timing data caused by the selection error of the application scene is reduced.
And the data updating unit 14 is configured to acquire time data set based on a preset setting manner, and update the timing data based on the time data to obtain updated timing data corresponding to the application scene.
In a specific implementation, after the timing data obtaining unit 12 obtains the timing data, the data updating unit 14 may obtain time data set by the user based on a preset setting mode. It is understood that the data updating unit 14 may obtain the time data through user-controlled body actions (for example, increasing the time by blinking the left eye, decreasing the time by blinking the right eye, and confirming the setting by blinking both eyes) or directly through voice commands for increasing, decreasing, and confirming the time. Further, the data updating unit 14 may update the timing data based on the time data to obtain the updated timing data corresponding to the application scene. For example, when the timing data is 10 minutes, if the user wants to put in a new food material at 11:06, which is 6 minutes after the current time point (11:00) at which the timing data was acquired, the time needs to be manually reduced by 4 minutes on the basis of 10 minutes, so as to obtain updated timing data of 6 minutes.
In the embodiment of the invention, the manual setting of the timing data increases the timing diversity of the timing device 1.
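The update in the example above can be sketched with Python's standard `datetime` arithmetic; the wall-clock values follow the example in the text, while the function name is an illustrative assumption.

```python
from datetime import datetime, timedelta

# Sketch of the timing-data update: the original timing data is 10 minutes
# from 11:00, but the user wants the prompt at 11:06, so the remaining
# duration is recomputed from the user's chosen prompt time.
def update_timing(acquired_at: datetime, desired_prompt: datetime) -> timedelta:
    """Derive updated timing data from the user's chosen prompt time."""
    return desired_prompt - acquired_at

updated = update_timing(datetime(2017, 3, 30, 11, 0),
                        datetime(2017, 3, 30, 11, 6))
```

This reproduces the worked example: 10 minutes of timing data manually reduced by 4 minutes yields updated timing data of 6 minutes.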
And the prompt information output unit 13 is configured to output prompt information corresponding to the application scenario when the prompt time point indicated by the timing data is reached.
In a specific implementation, when the prompt time point indicated by the timing data is reached, the prompt information output unit 13 may output the prompt information corresponding to the application scenario.
Referring to fig. 6, a schematic structural diagram of the prompt information output unit 13 is provided for the embodiment of the present invention. As shown in fig. 6, the prompt information output unit 13 may include:
and the feature extraction subunit 131 is configured to monitor a behavior image of the target object in the application scene, and extract a behavior image feature of the behavior image.
In a specific implementation, the feature extraction subunit 131 may monitor a behavior image of the target object in the application scene, and it is understood that the behavior image may be an image captured by the timing device when the target object performs various behavior actions in the application scene.
Further, the feature extraction subunit 131 may extract behavior image features of the behavior image. It is to be understood that the feature extraction subunit 131 may, while extracting the behavior image features, determine the behavior state of the target object within the time period indicated by the timing data based on the behavior image features. Alternatively, the feature extraction subunit 131 may process all of the previously acquired behavior image data together within a short period before the prompt time point is reached, while also processing the behavior image data of that short period in real time; for example, at 11:09 it may start processing the behavior image data acquired during 11:00-11:09 while simultaneously processing the behavior image data of 11:09-11:10.
Optionally, the feature extraction subunit 131 is further configured to, when the target time point indicated by the timing data is reached, start to monitor a behavior image of the target object in the application scene, and extract a behavior image feature of the behavior image.
In a specific implementation, the feature extraction subunit 131 may start to monitor behavior images of the target object in the application scene when the target time point indicated by the timing data is reached, and extract behavior image features of the behavior images. It is to be understood that the target time point may be a time point that is before the prompt time point and is separated from it by a preset time period. For example, if the time point of acquiring the timing data is 11:00 and the timing data is 10 minutes, the prompt time point is 11:10, and 11:07, which is 3 minutes before the prompt time point, may be set as the target time point. It can be understood that the preset time period between the target time point and the prompt time point may be preset according to the actual application scene, and the feature extraction subunit 131 does not monitor the behavior of the target object until the target time point is reached.
Further, when the target time point is reached, the feature extraction subunit 131 may start to monitor a behavior image of the target object in the application scene, and extract a behavior image feature of the behavior image.
In this embodiment of the present invention, by acquiring behavior images only within the preset time period between the target time point and the prompt time point, the feature extraction subunit 131 avoids continuous monitoring and thereby reduces the power consumption of the timing device.
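The power-saving monitoring window can be sketched with the values from the example above (10-minute timing data acquired at 11:00, a 3-minute preset period); the function name and return shape are illustrative assumptions, not the patent's API.

```python
from datetime import datetime, timedelta

# Sketch of the monitoring window: monitoring starts only at the target
# time point, which lies a preset period before the prompt time point.
def monitoring_window(acquired_at: datetime,
                      timing: timedelta,
                      preset_period: timedelta):
    """Return (target_time_point, prompt_time_point)."""
    prompt = acquired_at + timing
    return prompt - preset_period, prompt

target, prompt = monitoring_window(datetime(2017, 3, 30, 11, 0),
                                   timedelta(minutes=10),
                                   timedelta(minutes=3))
```

With these inputs the target time point is 11:07 and the prompt time point is 11:10, so behavior images are acquired only during the 3-minute window rather than the full 10 minutes.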
A state determining subunit 132, configured to determine a behavior state of the target object within the time period indicated by the timing data based on the behavior image feature.
In a specific implementation, the state determining subunit 132 may determine the behavior state of the target object within the time period indicated by the timing data based on the behavior image features, for example, determine whether the behavior state of the target object within the time period indicated by the timing data (for example, from 11:00 to 11:10) is adding water to the food material being cooked or taking no action.
An information output subunit 133, configured to output a prompt information for the application scene when the behavior state does not match the preset behavior state indicated by the preset behavior image and a prompt time point indicated by the timing data is reached.
It is to be understood that when the timing data is a time period taking the current time point as the start timing point, the time point at which the time period ends may be the prompt time point. In a specific implementation, when the behavior state does not match the preset behavior state indicated by the preset behavior image and the prompt time point indicated by the timing data is reached, the information output subunit 133 may output prompt information for the application scene. For example, if the preset behavior state is adding water to the food material being cooked and the behavior state in the time period indicated by the timing data is no action, the information output subunit 133 may output water-adding prompt information for the dish-cooking application scene when the prompt time point indicated by the timing data is reached. It is to be understood that the information output subunit 133 may output voice information prompting the user to add water, or animation and music information prompting the user to add water.
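The output rule of the information output subunit reduces to a two-condition check, sketched below; the state names are illustrative stand-ins for whatever behavior states a real implementation recognizes.

```python
# Minimal sketch of the output rule: prompt information is emitted only
# when the prompt time point has been reached AND the observed behavior
# state does not match the preset behavior state.
def should_prompt(behavior_state: str, preset_state: str,
                  prompt_time_reached: bool) -> bool:
    """Decide whether the prompt (e.g. a water-adding reminder) fires."""
    return prompt_time_reached and behavior_state != preset_state
```

In the text's example, the user has taken no action by 11:10 while the preset state is adding water, so the reminder fires; had the user already added water, no prompt would be output.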
The data saving unit 15 is configured to replace and save the timing data corresponding to the application scene as the updated timing data.
In a specific implementation, when the data updating unit 14 obtains time data set by the user according to a preset setting mode during timing and updates the timing data based on the time data to obtain the updated timing data corresponding to the application scene, the data saving unit 15 may replace and save the timing data corresponding to the application scene as the updated timing data. It can be understood that, after the timing data corresponding to the application scene has been replaced and saved as the updated timing data, when the target object is in the same application scene again, the timing data corresponding to the application scene is the updated timing data. For example, the timing data corresponding to the application scene (stewing spareribs) in the timing device 1 is 10 minutes, but the user updates the timing data to 15 minutes according to his or her own preference, replacing the saved 10 minutes with 15 minutes; when the spareribs-stewing application scene is encountered again, the timing data acquired by the timing data obtaining unit 12 is 15 minutes.
In the embodiment of the invention, the scene image where the target object is located is acquired, the scene image features carried by the scene image are extracted, the application scene where the target object is located is determined based on the scene image features, the timing data corresponding to the application scene is acquired when the scene image features match the preset image features of a preset scene image, and the prompt information corresponding to the application scene is output when the prompt time point indicated by the timing data is reached. Acquiring the application scene where the target object is located based on an augmented reality technology and acquiring the timing data corresponding to that scene automatically realizes the timing function of the terminal device and reduces the operation complexity of setting the timing function. Acquiring the target application scene from the selected preset scene list increases the accuracy of acquiring the application scene and reduces the probability of acquiring wrong timing data due to an application scene selection error. The manual setting of the timing data increases the timing diversity of the timing device. Acquiring behavior images only within the preset time period between the target time point and the prompt time point reduces the power consumption of the timing device.
Fig. 7 is a schematic structural diagram of another timing device according to an embodiment of the present invention. As shown in fig. 7, the timing device 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display screen (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 7, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a timing program.
In the timing device 1000 shown in fig. 7, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; and the processor 1001 may be configured to call the timing program stored in the memory 1005, and specifically perform the following operations:
acquiring a scene image of a target object, extracting scene image characteristics carried by the scene image, and determining an application scene of the target object based on the scene image characteristics;
when the scene image characteristics are matched with preset image characteristics of a preset scene image, acquiring timing data corresponding to the application scene;
and outputting prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached.
In an embodiment, when the processor 1001 acquires the timing data corresponding to the application scene when the scene image feature is matched with a preset image feature of a preset scene image, the following operations are specifically performed:
when the scene image characteristics are matched with preset image characteristics of a preset scene image, acquiring a preset scene list corresponding to the scene image;
and acquiring a target application scene in the selected preset scene list, and acquiring timing data corresponding to the target application scene.
In one embodiment, when outputting the prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached, the processor 1001 specifically performs the following operations:
monitoring a behavior image of the target object in the application scene, and extracting behavior image characteristics of the behavior image;
determining a behavior state of the target object within a time period indicated by the timing data based on the behavior image features;
and when the behavior state is not matched with the preset behavior state indicated by the preset behavior image and the prompt time point indicated by the timing data is reached, outputting prompt information aiming at the application scene.
In one embodiment, when monitoring the behavior image of the target object in the application scene and extracting the behavior image features of the behavior image, the processor 1001 specifically performs the following operations:
when a target time point indicated by the timing data is reached, starting to monitor a behavior image of the target object in the application scene, and extracting behavior image features of the behavior image;
and the target time point is a time point which is before the prompt time point and is separated from the prompt time point by a preset time period.
In one embodiment, after the processor 1001 obtains the timing data corresponding to the application scene when the scene image feature matches with a preset image feature of a preset scene image, the processor further performs the following operations:
and acquiring time data set based on a preset setting mode, and updating the timing data based on the time data to obtain updated timing data corresponding to the application scene.
In one embodiment, after the processor 1001 outputs the prompt information corresponding to the application scenario when the prompt time point indicated by the timing data is reached, the following operations are further performed:
and replacing and saving the timing data corresponding to the application scene as the updated timing data.
In the embodiment of the invention, the scene image where the target object is located is acquired, the scene image features carried by the scene image are extracted, the application scene where the target object is located is determined based on the scene image features, the timing data corresponding to the application scene is acquired when the scene image features match the preset image features of a preset scene image, and the prompt information corresponding to the application scene is output when the prompt time point indicated by the timing data is reached. Acquiring the application scene where the target object is located based on an augmented reality technology and acquiring the timing data corresponding to that scene automatically realizes the timing function of the terminal device and reduces the operation complexity of setting the timing function. Acquiring the target application scene from the selected preset scene list increases the accuracy of acquiring the application scene and reduces the probability of acquiring wrong timing data due to an application scene selection error. The manual setting of the timing data increases the timing diversity of the timing device. Acquiring behavior images only within the preset time period between the target time point and the prompt time point reduces the power consumption of the timing device.
It should be noted that, for the sake of simplicity, the above method embodiments are described as a series of actions, but those skilled in the art should understand that the present invention is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously. Further, those skilled in the art should understand that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the present invention. In the above embodiments, the descriptions of the respective embodiments have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (6)

1. A timing method, comprising:
acquiring a scene image of a target object, extracting scene image characteristics carried by the scene image, and determining an application scene of the target object based on the scene image characteristics; the scene image in which the target object is located comprises at least one scene identifier capable of determining an application scene in which the target object is located, and the scene image features comprise an outer contour of the scene identifier, and/or a color identifier, and/or position information;
when the scene image characteristics are matched with preset image characteristics of a preset scene image, acquiring timing data corresponding to the application scene;
acquiring time data set based on a preset setting mode, and updating the timing data based on the time data to obtain updated timing data corresponding to the application scene; wherein the preset setting mode comprises setting by controlling the body function;
when the prompt time point indicated by the timing data is reached, outputting prompt information corresponding to the application scene, wherein the outputting comprises: when a target time point indicated by the timing data is reached, starting to monitor a behavior image of the target object in the application scene, extracting behavior image features of the behavior image, determining, based on the behavior image features, a behavior state of the target object in the time period indicated by the timing data, and, when the behavior state does not match a preset behavior state indicated by a preset behavior image and the prompt time point indicated by the timing data is reached, outputting prompt information for the application scene; wherein the target time point is a time point that precedes the prompt time point by a preset time period.
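The monitoring step of claim 1 can be sketched minimally (assumed names; the patent does not prescribe code): monitoring begins at a target time point that precedes the prompt time point by a preset period, and the prompt fires only if the preset behavior state was never observed during that window:

```python
def should_prompt(prompt_time: float, lead_s: float, behavior_states: list,
                  preset_state: str, now: float) -> bool:
    """Decide whether to emit the prompt at prompt_time.

    behavior_states is a list of (timestamp, state) pairs produced by the
    behavior-image monitoring that starts at the target time point,
    i.e. prompt_time - lead_s. The prompt is suppressed if the preset
    behavior state was observed anywhere in that window.
    """
    target_time = prompt_time - lead_s          # target time point
    if now < prompt_time:
        return False                            # prompt time point not yet reached
    observed = [s for t, s in behavior_states if target_time <= t <= prompt_time]
    return preset_state not in observed

# Example: prompt at t=100 with a 30 s lead; the preset state "sleeping"
# was never observed in [70, 100], so the prompt fires.
states = [(75, "reading"), (90, "reading")]
fires = should_prompt(100.0, 30.0, states, "sleeping", now=100.0)
# fires == True
```

The point of the window is the power saving claimed in the description: behavior images are captured only between the target time point and the prompt time point, not continuously.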
2. The method of claim 1, wherein when the scene image feature matches a preset image feature of a preset scene image, acquiring timing data corresponding to the application scene comprises:
when the scene image characteristics are matched with preset image characteristics of a preset scene image, acquiring a preset scene list corresponding to the scene image;
and acquiring a target application scene in the selected preset scene list, and acquiring timing data corresponding to the target application scene.
3. The method of claim 1, wherein after outputting the prompt information corresponding to the application scenario when the prompt time point indicated by the timing data is reached, further comprising:
and replacing the timing data corresponding to the application scene with the updated timing data and saving it.
4. A timing device, comprising:
the application scene determining unit is used for acquiring a scene image where a target object is located, extracting scene image characteristics carried by the scene image, and determining an application scene where the target object is located based on the scene image characteristics; the scene image in which the target object is located comprises at least one scene identifier capable of determining an application scene in which the target object is located, and the scene image features comprise an outer contour of the scene identifier, and/or a color identifier, and/or position information;
the timing data acquisition unit is used for acquiring timing data corresponding to the application scene when the scene image characteristics are matched with preset image characteristics of a preset scene image;
the data updating unit is used for acquiring time data set based on a preset setting mode and updating the timing data based on the time data to obtain updated timing data corresponding to the application scene; wherein the preset setting mode comprises setting by controlling the body function;
the prompt information output unit is used for outputting prompt information corresponding to the application scene when the prompt time point indicated by the timing data is reached;
the prompt information output unit includes:
the characteristic extraction subunit is configured to, when a target time point indicated by the timing data is reached, start to monitor a behavior image of the target object in the application scene, and extract a behavior image characteristic of the behavior image, where the target time point is a time point that is before the prompt time point and is separated from the prompt time point by a preset time period;
a state determination subunit, configured to determine, based on the behavior image feature, a behavior state of the target object within a time period indicated by the timing data;
and the information output subunit is used for outputting prompt information for the application scene when the behavior state does not match the preset behavior state indicated by the preset behavior image and the prompt time point indicated by the timing data is reached.
5. The apparatus of claim 4, wherein the timing data acquisition unit comprises:
the list acquiring subunit is configured to acquire a preset scene list corresponding to the scene image when the scene image feature matches a preset image feature of a preset scene image;
and the timing data acquisition subunit is used for acquiring the target application scene in the selected preset scene list and acquiring the timing data corresponding to the target application scene.
6. The apparatus of claim 4, further comprising:
and the data replacement and storage unit is used for replacing the timing data corresponding to the application scene with the updated timing data and saving it.
CN201710203813.4A 2017-03-30 2017-03-30 Timing method and device Active CN107025044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710203813.4A CN107025044B (en) 2017-03-30 2017-03-30 Timing method and device

Publications (2)

Publication Number Publication Date
CN107025044A CN107025044A (en) 2017-08-08
CN107025044B true CN107025044B (en) 2021-02-23

Family

ID=59526385

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109725805B (en) * 2018-12-28 2021-07-30 维沃移动通信有限公司 Alarm clock setting method and terminal
CN111538420B (en) * 2020-04-22 2023-09-29 掌阅科技股份有限公司 Electronic book page display method, electronic equipment and computer storage medium
CN113132625B (en) * 2021-03-11 2023-05-12 宇龙计算机通信科技(深圳)有限公司 Scene image acquisition method, storage medium and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194227A (en) * 2010-03-15 2011-09-21 欧姆龙株式会社 Image attribute discrimination apparatus, attribute discrimination support apparatus, image attribute discrimination method and attribute discrimination support apparatus controlling method
CN103513768A (en) * 2013-08-30 2014-01-15 展讯通信(上海)有限公司 Control method and device based on posture changes of mobile terminal and mobile terminal
CN105160326A (en) * 2015-09-15 2015-12-16 杭州中威电子股份有限公司 Automatic highway parking detection method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104914732B (en) * 2015-06-29 2018-01-12 北京金山安全软件有限公司 Method and device for setting timing reminding
CN106096902A (en) * 2016-04-29 2016-11-09 乐视控股(北京)有限公司 A kind of Intelligent Establishment method and device of reminder events
CN106375448A (en) * 2016-09-05 2017-02-01 腾讯科技(深圳)有限公司 Image processing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant