CN111582002A - Scene recognition method and device and electronic equipment - Google Patents
- Publication number
- CN111582002A (application number CN201910117348.1A)
- Authority
- CN
- China
- Prior art keywords
- service information
- scene
- information
- image
- object service
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0281—Customer communication at a business location, e.g. providing product or service information, consulting
Abstract
The disclosure relates to a scene recognition method and apparatus, and an electronic device. The method includes: acquiring an image in a preview interface of an image acquisition component; performing feature extraction and recognition on the image to obtain a set identifier; acquiring, from an artificial intelligence scene recognition library, the scene to which the object corresponding to the set identifier belongs and object service information; and outputting the scene to which the object belongs and the object service information. With this technical scheme, the scene to which an object belongs can be obtained based on a set identifier in the image, which helps expand the range and accuracy of intelligent scene recognition; outputting object service information, such as a pop-up commodity brand introduction or store address information, can further improve the user experience.
Description
Technical Field
The present disclosure relates to the field of intelligent scene recognition technologies, and in particular, to a scene recognition method and apparatus, and an electronic device.
Background
Currently, Artificial Intelligence (AI) scene recognition is an important function of digital cameras, and more and more users like to take high-quality photos with a camera's AI scene recognition function. AI scene recognition can recognize various scene modes, but it may fail to accurately recognize the article a user wants to recognize. For example, when a user photographs a coffee cup using the intelligent scene recognition function of a digital camera, the coffee cup may not be accurately recognized as a food-related article. The AI scene recognition function in the related art therefore suffers from a low recognition accuracy rate.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide a scene recognition method, a scene recognition apparatus, and an electronic device, so as to solve the problems of low accuracy and limited range of scene recognition.
According to a first aspect of the embodiments of the present disclosure, a scene recognition method is provided, which may include:
acquiring an image in a preview interface of an image acquisition component;
extracting and identifying the features of the image to obtain a set identifier;
acquiring the scene to which the object corresponding to the set identifier belongs and object service information from an artificial intelligence scene recognition library;
and outputting the scene to which the object belongs and the object service information.
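The four claimed steps can be sketched as a small pipeline. Everything below — the function names, the `SCENE_LIBRARY` stand-in for the artificial intelligence scene recognition library, and its sample entries — is an illustrative assumption, not part of the disclosed implementation:

```python
from typing import Optional, Tuple

# Toy stand-in for the artificial intelligence scene recognition library:
# maps a set identifier to (scene the object belongs to, object service info).
SCENE_LIBRARY = {
    "red_cross": ("hospital", "Nearby hospital: 500 m north"),
    "coffee_logo_a": ("food", "Brand A coffee; nearest store: 2nd floor"),
}

def recognize_identifier(image: bytes) -> Optional[str]:
    """Placeholder for step 2, feature extraction and recognition.
    A real system would run a detector/classifier on the preview image."""
    return "red_cross" if b"cross" in image else None

def recognize_scene(image: bytes) -> Optional[Tuple[str, str]]:
    identifier = recognize_identifier(image)   # step 2: obtain set identifier
    if identifier is None:
        return None
    return SCENE_LIBRARY.get(identifier)       # step 3: library lookup

# Step 1 is the preview-interface capture (simulated here by raw bytes);
# step 4 outputs the matched scene and object service information.
result = recognize_scene(b"image-with-cross")
```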
In an embodiment, the extracting and recognizing the features of the image to obtain the setting identifier includes:
receiving focusing operation triggered by a user in the image;
determining a target object to be recognized based on the focusing operation;
and performing feature extraction and recognition on the target object in the image to obtain the set identifier.
In an embodiment, the obtaining the scene to which the object corresponding to the identifier belongs and the object service information in the artificial intelligence scene recognition library includes:
inquiring the scene to which the object corresponding to the identifier belongs and the object service information in a local artificial intelligence scene recognition library; or,
and inquiring the scene to which the object corresponding to the identifier belongs and the object service information in an artificial intelligent scene recognition library in the cloud server.
In an embodiment, the method further comprises:
receiving an operation triggered by the user aiming at the object service information selection;
if the operation indicates that the user does not need to view the object service information, exiting the display interface of the object service information;
if the operation indicates that the user wants to acquire the detailed information corresponding to the object service information, acquiring the detailed information corresponding to the object service information;
and outputting the detail information.
In one embodiment, the object service information includes one or any combination of the following information: description information, positioning information, path information, alternative information of the object.
According to a second aspect of the embodiments of the present disclosure, there is provided a scene recognition apparatus, the apparatus including:
the first acquisition module is configured to acquire an image in a preview interface of the image acquisition component;
the image identification module is configured to extract and identify the features of the image acquired by the first acquisition module to obtain a set identifier;
the second acquisition module is configured to acquire the scene to which the object corresponding to the set identifier and the object service information, which are acquired by the image recognition module, from the artificial intelligence scene recognition library;
and the first output module is configured to output the scene to which the object belongs and the object service information acquired by the second acquisition module.
In one embodiment, the image recognition module comprises:
a receiving sub-module configured to receive a focusing operation triggered by a user in the image;
a determination submodule configured to determine a target object to be identified based on the focusing operation received by the reception submodule;
and the recognition submodule is configured to extract and recognize the features of the target object in the image to obtain the set identification.
In one embodiment, the second obtaining module includes:
the first acquisition sub-module is configured to query the scene to which the object corresponding to the identifier belongs and the object service information in a local artificial intelligence scene recognition library; or,
and the second acquisition submodule is configured to query the scene to which the object corresponding to the identifier belongs and the object service information in an artificial intelligence scene recognition library in the cloud server.
In one embodiment, the apparatus further comprises:
a receiving module configured to receive an operation triggered by a user selection for the object service information;
the exiting module is configured to exit the display interface of the object service information if the operation received by the receiving module indicates that the user does not need to view the object service information;
a third obtaining module configured to, if the operation received by the receiving module indicates that the user wants to obtain the detailed information corresponding to the object service information, obtain the detailed information corresponding to the object service information;
a second output module configured to output the detail information acquired by the third acquisition module.
In one embodiment, the object service information includes one or any combination of the following information: description information, positioning information, path information, alternative information of the object.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring an image in a preview interface of an image acquisition component;
extracting and identifying the features of the image to obtain a set identifier;
acquiring the scene to which the object corresponding to the set identifier belongs and object service information from an artificial intelligence scene recognition library;
and outputting the scene to which the object belongs and the object service information.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects. When the image acquisition component is used, feature extraction and recognition can be performed directly on the image in the preview interface; when a set identifier is recognized, the scene to which the corresponding object belongs and the object service information can be matched from the AI scene recognition library and output. During AI scene recognition, the scene to which an object belongs can be obtained based on set identifiers in the image, such as a commodity trademark logo, a toilet sign, or a traffic sign. This helps expand the recognition range and accuracy of intelligent scene recognition, and outputting object service information, such as a pop-up commodity brand introduction or store address information, can improve the user experience.
In addition, by triggering a focusing operation on a target object in the image, the set identifier corresponding to that target object can be recognized, so that only the object service information and scene category the user wants to know are obtained, making scene recognition more intelligent.
When an operation triggered by the user on the object service information is received, detail information of the target object can be output to the user. For example, if the target object is a branded commodity, the object service information may only display the store address information of that brand. The object service information may also include replaceable information for the branded commodity, that is, information on a similar commodity that can replace the object: if the object is coffee brand A, the object service information can simultaneously provide brand B coffee of the same price or taste, together with related information about brand B (such as store address information). The business hours, discount information, and the like of the current store can further be prompted to the user through the detail information. In this way, life services are provided to the user from multiple aspects, and the functions of intelligent scene recognition are expanded.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating a scene recognition method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of scene recognition, according to an example embodiment.
Fig. 3 is a flowchart illustrating a scene recognition method according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a scene recognition apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating another scene recognition apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an apparatus applicable to scene recognition according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a scene recognition method according to an exemplary embodiment. The scene recognition method can be applied to a camera or to an electronic device that includes a camera component (such as a smartphone or a tablet computer). As shown in Fig. 1, the method includes the following steps:
in step 101, an image in a preview interface of an image acquisition component is acquired.
In an embodiment, the image capturing component in the present disclosure needs to have an artificial intelligence scene recognition function, and a user may start the artificial intelligence scene recognition function through a touch screen, a physical button, or a menu command.
In step 102, feature extraction and recognition are performed on the image to obtain a setting identifier.
In an embodiment, if the user does not perform a focusing operation on the image in the preview interface, the global feature extraction may be directly performed on the image, and a specific implementation manner of performing the feature extraction and recognition on the image may refer to the solutions of the related art, which is not described in detail herein.
In an embodiment, if a user performs a focusing operation on a certain position of an image in the preview interface, feature extraction and recognition may be performed only on an image area where the focusing operation is performed.
In one embodiment, the setting identifier may be a brand logo, a hospital identifier, a traffic identifier, or other common identifiers or icons.
In step 103, the scene to which the object corresponding to the setting identifier belongs and the object service information are obtained in the artificial intelligence scene recognition library.
In one embodiment, the scene to which the object belongs may be a landscape, a food, a service place, and the like.
In one embodiment, the object service information may include description information corresponding to the recognized identifier, such as which brand the identifier corresponds to and introduction information for that brand. The object service information may further include positioning information for a nearby store of the corresponding brand, such as Global Positioning System (GPS) coordinates, and path information indicating how the user can reach that store. When no store is nearby, it may further include traffic indication information, such as taxi fare information or public transport information.
In an embodiment, the object service information may be service information that can be provided for the user as will be appreciated by those skilled in the art, for example, the object service information may further include replaceable information, that is, information of the same type of commodity that can replace the object, for example, if the object is a coffee a brand, then the object service information may provide coffee of a coffee B brand and related information of the coffee B brand (such as store address information) of the same price or the same taste at the same time.
In one embodiment, the object service information may also include other information capable of providing life services for the user, such as commodity prices, which will not be described in detail herein.
In an embodiment, the artificial intelligence scene recognition library may record correspondences between various common identifiers or icons, such as brand logos, hospital signs, and traffic signs, and the scenes to which objects belong, for example, a correspondence between a red cross sign and a hospital scene. It may also record correspondences between these identifiers or icons and object service information. For example, when a red cross sign is recognized in an image, the device can obtain, through the artificial intelligence scene recognition library, introduction information for a nearby hospital along with its positioning information and path information; when the user is unfamiliar with nearby roads, that positioning and path information helps the user quickly reach the desired hospital.
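The identifier-to-scene correspondence described above can be pictured as a small lookup table, with the local-library and cloud-server alternatives from the earlier embodiments forming a fallback chain. All names, entries, and the `query_cloud` stub below are hypothetical illustrations, not the disclosed implementation:

```python
# Hypothetical local recognition library: identifier -> scene + service info.
LOCAL_LIBRARY = {
    "red_cross": {
        "scene": "hospital",
        "service_info": {
            "description": "General hospital",
            "positioning": (39.90, 116.40),  # e.g. GPS coordinates
            "path": "Go straight 300 m, then turn left",
        },
    },
}

def query_cloud(identifier):
    """Stand-in for querying the recognition library on a cloud server."""
    return None  # pretend the cloud has no extra entries in this sketch

def lookup(identifier):
    # Query the local library first; fall back to the cloud server,
    # mirroring the two alternatives described in the text.
    entry = LOCAL_LIBRARY.get(identifier)
    if entry is None:
        entry = query_cloud(identifier)
    return entry
```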
In one embodiment, the artificial intelligence scene recognition library can be built from mass statistics of users' actual usage data or from large-scale surveys; in another embodiment, it can be obtained by the electronic device provider through cloud search combined with an algorithm.
In step 104, the scene to which the object belongs and the object service information are output.
In an embodiment, the scene to which the object belongs may be output through an interface display; that is, in the preview interface of the image acquisition component, an "AI" identifier is displayed together with the recognized scene category identifier.
In an embodiment, the object service information may be output through an interface display, for example, a pop-up box in the preview interface of the image acquisition component that describes the object service information in text. In another embodiment, the object service information may be output as sound; for example, if the device recognizes a toilet sign, it may announce "the nearest toilet is ahead; go straight for 100 meters". In yet another embodiment, the object service information can be output through the interface display and sound at the same time. The object service information may also be output in other ways that can be imagined by those skilled in the art.
In one embodiment, the device may provide multiple ways of outputting the object service information, and the user may select one of them. For example, if the user is currently in an office, the user may choose output through the device interface; if the user is currently traveling on the road, the user may choose output as sound.
In this embodiment, when the image acquisition component is used, feature extraction and recognition may be performed directly on the image in the preview interface. When the set identifier is recognized, the scene to which the corresponding object belongs and the object service information may be matched from the AI scene recognition library and output. During AI scene recognition, the scene to which an object belongs can be obtained based on set identifiers in the image, such as a commodity trademark logo, a toilet sign, or a traffic sign; this can expand the range and accuracy of intelligent scene recognition, and outputting object service information, such as a pop-up commodity brand introduction or store address information, can improve the user experience.
In one embodiment, the extracting and recognizing features of the image to obtain the setting identifier includes:
receiving focusing operation triggered by a user in an image;
determining a target object to be recognized based on the focusing operation;
and performing feature extraction and recognition on the target object in the image to obtain a set identifier.
In an embodiment, acquiring scene to which an object corresponding to an identifier belongs and object service information from an artificial intelligence scene recognition library includes:
inquiring the scene to which the object corresponding to the identifier belongs and the object service information in a local artificial intelligence scene recognition library; or,
and inquiring the scene to which the object corresponding to the identifier belongs and the object service information in an artificial intelligent scene recognition library in the cloud server.
In an embodiment, the method further comprises:
receiving an operation triggered by a user aiming at the selection of the object service information;
if the operation indicates that the user does not need to view the object service information, exiting the display interface of the object service information;
if the operation indicates that the user wants to obtain the detailed information corresponding to the object service information, obtaining the detailed information corresponding to the object service information;
and outputting the detail information.
In one embodiment, the object service information includes description information, positioning information, and path information of the target object.
Please refer to the following embodiments for details of how to perform scene recognition.
The technical solutions provided by the embodiments of the present disclosure are described below with specific embodiments.
FIG. 2 is a flow diagram illustrating a method of scene recognition in accordance with an exemplary embodiment; the present embodiment utilizes the above method provided by the embodiment of the present disclosure to exemplarily explain that the target object is recognized through the focusing operation triggered by the user in the preview interface, as shown in fig. 2, the method includes the following steps:
in step 201, an image in a preview interface of an image acquisition component is acquired.
In step 202, a focusing operation triggered by a user in an image is received.
In step 203, a target object to be recognized is determined based on the focusing operation.
In an embodiment, when a focusing operation triggered by the user's tap in the preview interface is received, the object in the image area corresponding to the focusing operation can automatically be determined as the target object to be recognized.
In an embodiment, when no focusing operation triggered by the user's tap in the preview interface is received, any object in the whole image for which a set identifier can be recognized may be determined as a target object.
In step 204, feature extraction and recognition are performed on the target object in the image to obtain a set identifier.
In an embodiment, if a user performs a focusing operation on a certain position of an image in the preview interface, feature extraction and recognition may be performed only on an image area where the focusing operation is performed.
In an embodiment, if the user does not perform a focusing operation on the image in the preview interface, the global feature extraction may be directly performed on the image, and a specific implementation manner of performing the feature extraction and recognition on the image may refer to the solutions of the related art, which is not described in detail herein.
In step 205, the scene to which the object corresponding to the setting identifier belongs and the object service information are obtained in the artificial intelligence scene recognition library.
In step 206, the scene to which the object belongs and the object service information are output.
In this embodiment, the focusing operation is triggered on the target object in the image, so that the setting identifier corresponding to the target object can be identified, and then only the object service information and the scene type that the user wants to know are obtained, so that the scene identification is more intelligent.
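The focus-driven selection in steps 202 through 204 amounts to cropping the recognition region around the user's tap, falling back to the whole frame when no tap is received. The helper below is a sketch under assumed pixel-coordinate conventions; the function name and box size are illustrative:

```python
def region_for_focus(image_size, focus_point, box=100):
    """Return the crop rectangle (left, top, right, bottom) around the
    user's focus tap. If no tap was received (focus_point is None),
    fall back to the whole image, i.e. global feature extraction."""
    w, h = image_size
    if focus_point is None:
        return (0, 0, w, h)
    x, y = focus_point
    half = box // 2
    # Clamp the focus box to the image bounds.
    left, top = max(0, x - half), max(0, y - half)
    right, bottom = min(w, x + half), min(h, y + half)
    return (left, top, right, bottom)
```

Feature extraction and recognition would then run only on the returned region, so that only the identifier the user focused on is looked up in the library.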
Fig. 3 is a flowchart of a scene recognition method according to an exemplary embodiment, where this embodiment uses the method provided in the embodiment of the present disclosure to exemplarily explain how to obtain object service information, as shown in fig. 3, the method includes the following steps:
in step 301, an image in a preview interface of an image capture component is acquired.
In step 302, feature extraction and recognition are performed on the image to obtain a setting identifier.
In step 303, the scene to which the object corresponding to the setting identifier belongs and the object service information are obtained in the artificial intelligence scene recognition library.
In step 304, the scene to which the object belongs and the object service information are output.
In step 305, receiving an operation triggered by the user selection of the object service information, and executing step 306 or step 307.
In step 306, if the operation indicates that the user does not need to view the object service information, the user exits from the display interface of the object service information.
In step 307, if the operation indicates that the user wants to acquire the detailed information corresponding to the object service information, the detailed information corresponding to the object service information is acquired, and step 308 is executed.
In an embodiment, if an operation triggered by the user's selection on the object service information is received, detail information corresponding to the object service information may be further obtained. For example, if the recognized object is a branded commodity, the object service information may only display the store address information of that brand; it may also display replaceable commodities of the same type. If the object is coffee brand A, the object service information can simultaneously provide brand B coffee of the same price or taste, together with related information about brand B (such as store address information), and the business hours, discount information, and the like of the current store can be further prompted to the user through the detail information. If the recognized object is a hospital, the object service information only displays the hospital's description, positioning, and path information; the detail information can then further prompt the user with registration links for nearby hospitals, whether registration is available for each department, information about each department's current experts, and so on.
In step 308, the detail information is output.
In one embodiment, the detailed information corresponding to the object service information may be displayed through a pop-up page.
In this embodiment, when an operation triggered by the user on the object service information is received, the detail information of the target object may be output to the user. For example, if the target object is a branded commodity and the object service information only shows the store address information of that brand, the business hours, discount information, and the like of the current store may be further prompted to the user through the detail information. Life services are thus provided to the user from multiple aspects, and the functions of intelligent scene recognition are expanded.
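The branch in steps 305 through 308 can be sketched as a small dispatcher. The operation names, `handle_selection`, and the `fetch_details` stub are assumptions made for illustration only:

```python
def fetch_details(service_info):
    """Stand-in for step 307's detail lookup: enrich the service info
    with e.g. business hours and discount information."""
    return {**service_info, "hours": "09:00-21:00", "discount": "10% off"}

def handle_selection(operation, service_info):
    if operation == "dismiss":
        # Step 306: the user does not need the info; exit the display.
        return None
    if operation == "details":
        # Steps 307-308: fetch and output the richer detail information.
        return fetch_details(service_info)
    raise ValueError(f"unknown operation: {operation}")
```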
Fig. 4 is a block diagram illustrating a scene recognition apparatus according to an exemplary embodiment, and as shown in fig. 4, the scene recognition apparatus includes:
a first obtaining module 410 configured to obtain an image in a preview interface of an image acquisition component;
an image recognition module 420 configured to perform feature extraction and recognition on the image acquired by the first acquisition module 410 to obtain a setting identifier;
a second obtaining module 430, configured to obtain, in the artificial intelligence scene recognition library, a scene to which an object corresponding to the setting identifier identified by the image recognition module 420 belongs and object service information;
and a first output module 440 configured to output the scene to which the object acquired by the second acquisition module 430 belongs and the object service information.
Fig. 5 is a block diagram illustrating another scene recognition apparatus according to an exemplary embodiment, as shown in fig. 5, and based on the embodiment shown in fig. 4, in an embodiment, the image recognition module 420 includes:
a receiving sub-module 4201 configured to receive a focusing operation triggered by a user in an image;
a determination submodule 4202 configured to determine a target object to be recognized based on the focusing operation received by the reception submodule;
the recognition submodule 4203 is configured to perform feature extraction and recognition on the target object in the image, so as to obtain a set identifier.
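One way the determination submodule 4202 could map a focusing operation to a target object is a simple hit test against candidate bounding boxes. This is an illustrative sketch only; the bounding-box representation and the function name `determine_target` are assumptions, not part of the disclosure.

```python
def determine_target(focus_point, detected_objects):
    """Return the label of the detected object whose bounding box contains
    the user's focusing tap, or None if the tap hits no object.

    focus_point: (x, y) tap coordinates in the preview image.
    detected_objects: list of (label, (x0, y0, x1, y1)) bounding boxes.
    """
    fx, fy = focus_point
    for label, (x0, y0, x1, y1) in detected_objects:
        if x0 <= fx <= x1 and y0 <= fy <= y1:
            return label  # this object becomes the target to recognize
    return None

objects = [("coffee-cup", (10, 10, 60, 60)), ("menu", (80, 20, 200, 140))]
print(determine_target((100, 50), objects))  # → menu
```

The recognition submodule 4203 would then run feature extraction only on the returned target region rather than the whole image.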
In one embodiment, the second obtaining module 430 includes:
the first obtaining submodule 4301 is configured to query the scene to which the object corresponding to the identifier belongs and the object service information in a local artificial intelligence scene recognition library; alternatively,
the second obtaining submodule 4302 is configured to query, in an artificial intelligence scene recognition library in the cloud server, a scene to which the object belongs and object service information corresponding to the identifier.
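The two query paths of submodules 4301 and 4302 can be sketched as a local lookup with a cloud fallback. Note the disclosure presents local and cloud queries as alternatives; the local-first ordering here, and all names (`query_scene`, the dictionaries, the `cloud` stand-in), are illustrative assumptions.

```python
def query_scene(identifier, local_library, cloud_query):
    """Query the local library first (submodule 4301); if the identifier is
    unknown locally, fall back to the cloud server library (submodule 4302)."""
    result = local_library.get(identifier)
    if result is not None:
        return result
    return cloud_query(identifier)  # stand-in for a network request

local = {"tower": ("landmark", "description of the tower")}

def cloud(identifier):
    # Toy stand-in for the artificial intelligence scene recognition
    # library hosted on the cloud server.
    remote = {"latte": ("food", "nearby cafes serving lattes")}
    return remote.get(identifier)

print(query_scene("latte", local, cloud))  # → ('food', 'nearby cafes serving lattes')
```

A local library keeps recognition available offline and fast for common objects, while the cloud library covers a much larger identifier space.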
In an embodiment, the apparatus further comprises:
a receiving module 450 configured to receive an operation triggered by a user selection for object service information;
an exit module 480 configured to exit the display interface of the object service information if the operation received by the receiving module 450 indicates that the user does not need to view the object service information;
a third obtaining module 460 configured to obtain the detail information corresponding to the object service information if the operation received by the receiving module 450 indicates that the user wants to obtain that detail information;
a second output module 470 configured to output the detail information acquired by the third acquisition module 460.
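The dispatch performed by modules 450–480 on the user's operation can be sketched as follows. The operation labels, the `detail_store` mapping, and `handle_operation` are illustrative assumptions, not part of the disclosure.

```python
def handle_operation(operation, service_info, detail_store):
    """Dispatch the user's operation on the displayed object service info:
    exit the display (exit module 480) or fetch and output the detail
    information (third obtaining module 460 + second output module 470)."""
    if operation == "exit":      # user does not need to view the information
        return "exited"
    if operation == "details":   # user wants the corresponding detail info
        return detail_store.get(service_info["id"], "no details available")
    return "ignored"             # any other operation is left to other handlers

details = {"store-42": "open 9:00-21:00, 10% discount this week"}
info = {"id": "store-42", "address": "12 Main St"}
print(handle_operation("details", info, details))  # → open 9:00-21:00, 10% discount this week
```

This matches the embodiment above where, for a brand product whose service information shows only a store address, the detail view adds business hours and discounts.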
In one embodiment, the object service information includes one or a combination of any of the following: description information, positioning information, path information, alternative information of the object.
The implementation of the functions and actions of each unit in the above apparatus is described in detail in the implementation of the corresponding steps in the above method, and is not repeated here.
As for the apparatus embodiments, since they basically correspond to the method embodiments, the specific manner in which each module performs its operations has been described in detail in the method embodiments and will not be elaborated here. Some or all of the modules may be selected according to actual needs to achieve the purpose of the present disclosure, which one of ordinary skill in the art can understand and implement without inventive effort.
Fig. 6 is a block diagram illustrating a scene recognition apparatus according to an exemplary embodiment. For example, the apparatus 600 may be a video camera or another electronic device that includes a camera.
Referring to fig. 6, apparatus 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, voice playback, data communication, and recording. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation at the device 600. Examples of such data include instructions, messages, images, etc. for any application or method operating on device 600. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the apparatus 600. For example, the sensor component 614 may detect the open/closed state of the device 600 and the relative positioning of components, such as the display and keypad of the apparatus 600. The sensor component 614 may also detect a change in position of the apparatus 600 or a component of the apparatus 600, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a distance sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. Device 600 may access a wireless network based on a communication standard, such as WIFI, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the following method:
acquiring an image in a preview interface of an image acquisition component;
carrying out feature extraction and recognition on the image to obtain a set identifier;
acquiring a scene to which an object corresponding to a set identifier belongs and object service information from an artificial intelligence scene recognition library;
and outputting the scene to which the object belongs and the object service information.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (11)
1. A method for scene recognition, the method comprising:
acquiring an image in a preview interface of an image acquisition component;
extracting and identifying the features of the image to obtain a set identifier;
acquiring the scene to which the object corresponding to the set identifier belongs and object service information from an artificial intelligence scene recognition library;
and outputting the scene to which the object belongs and the object service information.
2. The method according to claim 1, wherein the extracting and recognizing the features of the image to obtain a set identifier comprises:
receiving focusing operation triggered by a user in the image;
determining a target object to be recognized based on the focusing operation;
and performing feature extraction and recognition on the target object in the image to obtain the set identifier.
3. The method according to claim 1, wherein the acquiring the scene to which the object corresponding to the set identifier belongs and the object service information in the artificial intelligence scene recognition library comprises:
inquiring the scene to which the object corresponding to the identifier belongs and the object service information in a local artificial intelligence scene recognition library; alternatively,
and inquiring the scene to which the object corresponding to the identifier belongs and the object service information in an artificial intelligent scene recognition library in the cloud server.
4. The method of claim 1, further comprising:
receiving an operation triggered by the user aiming at the object service information selection;
if the operation indicates that the user does not need to check the object service information, quitting the display interface of the object service information;
if the operation indicates that the user wants to acquire the detailed information corresponding to the object service information, acquiring the detailed information corresponding to the object service information;
and outputting the detail information.
5. The method of claim 1, wherein the object service information comprises one or any combination of the following information: description information, positioning information, path information, alternative information of the object.
6. A scene recognition apparatus, characterized in that the apparatus comprises:
the first acquisition module is configured to acquire an image in a preview interface of the image acquisition component;
the image identification module is configured to extract and identify the features of the image acquired by the first acquisition module to obtain a set identifier;
the second acquisition module is configured to acquire, from the artificial intelligence scene recognition library, the scene to which the object corresponding to the set identifier obtained by the image recognition module belongs and the object service information;
and the first output module is configured to output the scene to which the object belongs and the object service information acquired by the second acquisition module.
7. The apparatus of claim 6, wherein the image recognition module comprises:
a receiving sub-module configured to receive a focusing operation triggered by a user in the image;
a determination submodule configured to determine a target object to be identified based on the focusing operation received by the reception submodule;
and the recognition submodule is configured to extract and recognize the features of the target object in the image to obtain the set identification.
8. The apparatus of claim 6, wherein the second obtaining module comprises:
the first acquisition sub-module is configured to query the scene to which the object corresponding to the identifier belongs and the object service information in a local artificial intelligence scene recognition library; alternatively,
and the second acquisition submodule is configured to query the scene to which the object corresponding to the identifier belongs and the object service information in an artificial intelligence scene recognition library in the cloud server.
9. The apparatus of claim 6, further comprising:
a receiving module configured to receive an operation triggered by a user selection for the object service information;
the quitting module is configured to quit the display interface of the object service information if the operation received by the receiving module indicates that the user does not need to view the object service information;
a third obtaining module, configured to obtain the detailed information corresponding to the object service information if the operation received by the receiving module indicates that the user wants to obtain the detailed information corresponding to the object service information;
a second output module configured to output the detail information acquired by the third acquisition module.
10. The apparatus of claim 6, wherein the object service information comprises one or any combination of the following information: description information, positioning information, path information, alternative information of the object.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring an image in a preview interface of an image acquisition component;
extracting and identifying the features of the image to obtain a set identifier;
acquiring the scene to which the object corresponding to the set identifier belongs and object service information from an artificial intelligence scene recognition library;
and outputting the scene to which the object belongs and the object service information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910117348.1A CN111582002A (en) | 2019-02-15 | 2019-02-15 | Scene recognition method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910117348.1A CN111582002A (en) | 2019-02-15 | 2019-02-15 | Scene recognition method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111582002A true CN111582002A (en) | 2020-08-25 |
Family
ID=72110798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910117348.1A Pending CN111582002A (en) | 2019-02-15 | 2019-02-15 | Scene recognition method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111582002A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130259448A1 (en) * | 2008-10-03 | 2013-10-03 | 3M Innovative Properties Company | Systems and methods for optimizing a scene |
CN107358233A (en) * | 2017-06-30 | 2017-11-17 | 北京小米移动软件有限公司 | Information acquisition method and device |
CN107766432A (en) * | 2017-09-18 | 2018-03-06 | 维沃移动通信有限公司 | A kind of data interactive method, mobile terminal and server |
US20180239989A1 (en) * | 2017-02-20 | 2018-08-23 | Alibaba Group Holding Limited | Type Prediction Method, Apparatus and Electronic Device for Recognizing an Object in an Image |
- 2019
- 2019-02-15 CN CN201910117348.1A patent/CN111582002A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130259448A1 (en) * | 2008-10-03 | 2013-10-03 | 3M Innovative Properties Company | Systems and methods for optimizing a scene |
US20180239989A1 (en) * | 2017-02-20 | 2018-08-23 | Alibaba Group Holding Limited | Type Prediction Method, Apparatus and Electronic Device for Recognizing an Object in an Image |
CN107358233A (en) * | 2017-06-30 | 2017-11-17 | 北京小米移动软件有限公司 | Information acquisition method and device |
CN107766432A (en) * | 2017-09-18 | 2018-03-06 | 维沃移动通信有限公司 | A kind of data interactive method, mobile terminal and server |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107818180B (en) | Video association method, video display device and storage medium | |
CN104219785B (en) | Real-time video providing method, device and server, terminal device | |
CN105845124B (en) | Audio processing method and device | |
CN104506715A (en) | Method and device for displaying notification messages | |
CN109064277B (en) | Commodity display method and device | |
CN113382308B (en) | Information display method and device, electronic equipment and computer readable storage medium | |
CN107423386B (en) | Method and device for generating electronic card | |
CN104731880A (en) | Image ordering method and device | |
US9924334B1 (en) | Message pushing method, terminal equipment and computer-readable storage medium | |
US20170171706A1 (en) | Device displaying method, apparatus, and storage medium | |
CN105654533A (en) | Picture editing method and picture editing device | |
US9628966B2 (en) | Method and device for sending message | |
CN105549300A (en) | Automatic focusing method and device | |
CN106453528A (en) | Method and device for pushing message | |
CN108011990B (en) | Contact management method and device | |
CN107493366B (en) | Address book information updating method and device and storage medium | |
CN110046019A (en) | A kind of chat message lookup method, device and electronic equipment | |
CN105912202A (en) | Application sharing method and device | |
CN106506808B (en) | Method and device for prompting communication message | |
CN105740356B (en) | Method and device for marking target audio | |
CN113032627A (en) | Video classification method and device, storage medium and terminal equipment | |
CN104539497A (en) | Network connecting method and device | |
CN108491535B (en) | Information classified storage method and device | |
CN106372620B (en) | Video information sharing method and device | |
CN111582002A (en) | Scene recognition method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200825 |