CN107609452A - Processing method and processing device

Info

Publication number: CN107609452A
Application number: CN201710907874.9A
Authority: CN (China)
Prior art keywords: trigger, pattern, target, trigger pattern, patterns
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Priority date: (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Other languages: Chinese (zh)
Inventors: 邝宇豪, 贺钢, 王帮
Current assignee: Lenovo Beijing Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a processing method and device. After an image containing at least two trigger patterns is obtained, a target trigger pattern is determined from the at least two patterns based on a preset mode and is then identified. That is, a trigger pattern is identified even when multiple trigger patterns coexist, providing a new way of identifying trigger patterns and improving the convenience of using them.

Description

Processing method and device
Technical Field
The present application relates to the field of information technology, and more particularly, to a processing method and apparatus.
Background
With the development of information technology, recognizable trigger patterns such as two-dimensional codes and bar codes are used very widely, for example, scanning a two-dimensional code to pay, to follow a platform account, or to view a business card.
In actual use, a plurality of recognizable trigger patterns are often placed together; for example, a two-dimensional code for Alipay payment and a two-dimensional code for WeChat payment may both be displayed in a merchant's physical store. When two-dimensional codes are placed together, the user needs to pick one of them as the scanning target and then adjust the distance between the scanning device and the codes so that only the chosen target enters the scanning range and is identified. However, the inventors found through research that only this one identification mode is available for recognizable trigger patterns; the identification mode is single and its convenience is poor.
Disclosure of Invention
The application provides the following technical scheme:
a method of processing, comprising:
obtaining an image;
identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information;
identifying the target trigger pattern.
According to the above processing method, after an image containing at least two trigger patterns is obtained, a target trigger pattern is determined from the at least two patterns based on a preset mode and is then identified. In other words, a trigger pattern can be identified even when multiple trigger patterns coexist; this provides a novel trigger pattern identification mode and improves the convenience of using trigger patterns.
Preferably, the determining the target trigger pattern based on the preset mode includes:
detecting a first input operation;
in response to the first input operation, a trigger pattern is selected from the at least two trigger patterns as a target trigger pattern.
According to this processing method, the target trigger pattern is selected manually from the multiple trigger patterns: the user only needs to bring all the patterns into the scanning range and then manually select one trigger pattern for identification. This reduces the number of times the user must adjust the distance between the scanning device and the trigger patterns, shortens the operation process, and reduces the probability of misidentification.
Preferably, the determining the target trigger pattern based on the preset mode includes:
and selecting a first trigger pattern from the at least two trigger patterns as a target trigger pattern according to a selection strategy.
The above method, preferably, further comprises:
detecting a second input operation;
in response to the second input operation, triggering entry into a re-identification interface;
and selecting a second trigger pattern different from the first trigger pattern as a target trigger pattern according to the selection strategy.
According to this processing method, a different trigger pattern is selected for identification each time, avoiding the unnecessary repeated operations that misidentification would otherwise impose on the user.
Preferably, the determining the target trigger pattern based on the preset mode includes: all trigger patterns in the image are taken as target trigger patterns; the method further comprises the following steps:
displaying interfaces corresponding to the recognition results of all the trigger patterns;
and detecting a third input operation, and responding to the third input operation to select an interface as a final display interface.
According to this processing method, all the trigger patterns are taken as target trigger patterns and recognized, the interfaces corresponding to the recognition results are displayed to the user, and the user selects one interface as the final interface to be displayed. This selection mode is more intuitive, improves the convenience of applying trigger patterns, and alleviates the problem of a high misidentification rate of trigger patterns.
The above method, preferably, further comprises:
acquiring identification information;
displaying the target trigger pattern and the identification information in association to distinguish the target trigger pattern from non-target trigger patterns in the image.
In the above method, preferably, displaying the target trigger pattern and the identification information in association includes:
determining a preset area with the target trigger pattern as a center;
and displaying the identification information on the content in the preset area in an overlapping manner.
In the above method, preferably, the target trigger pattern is recognized while the target trigger pattern and the identification information are displayed in association; or the target trigger pattern is recognized after the target trigger pattern and the identification information have been displayed in association.
This processing method displays the target trigger pattern and the non-target trigger patterns in a visually distinguished way. The identification information guides the user to select the correct trigger pattern, reducing the probability of misidentifying a trigger pattern.
A processing apparatus, comprising: a processor, and a memory communicatively coupled with the processor;
the processor is used for obtaining an image; identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; identifying the target trigger pattern; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information.
A processing apparatus, comprising:
the acquisition module is used for acquiring an image;
the determining module is used for identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information;
and the identification module is used for identifying the target trigger pattern.
According to the above processing device, after an image containing at least two trigger patterns is obtained, a target trigger pattern is determined from the at least two trigger patterns based on a preset mode and is identified. That is, a trigger pattern can be identified even when multiple trigger patterns coexist, providing a novel mode of identifying trigger patterns and improving the convenience of using them.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an implementation of a processing method provided in an embodiment of the present application;
fig. 2 is a diagram illustrating an example of an image including a plurality of two-dimensional codes according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of an implementation of determining a target trigger pattern based on a preset mode according to an embodiment of the present application;
fig. 4 is an exemplary diagram illustrating a process of superimposing identification information on content displayed in a predetermined area according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another processing apparatus according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
The processing method and device provided by the embodiments of the present application can be applied to an electronic device. The electronic device has a display output module that can output information to be displayed (such as characters, symbols, pictures, and/or digital information) to a display device. The display device may be a display screen connected to the display output module in the electronic device, or a display device independent of the electronic device. The display device may also be a projection bearing surface that carries a projected picture, in which case the display output module is a projection module. The electronic device is also able to recognize a trigger pattern.
Referring to fig. 1, fig. 1 is a flowchart of an implementation of a processing method according to an embodiment of the present application, which may include:
step S11: an image is obtained.
The image may be an image acquired by an image acquisition device of the electronic device, an image locally stored by the electronic device, or an image acquired by the electronic device from the internet.
Step S12: identifying the obtained image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information.
The trigger pattern may be a two-dimensional code, in which case the trigger information is the black-and-white pattern distributed on the plane, i.e., in two dimensions, according to a certain rule. Alternatively,
the trigger pattern may be a bar code, in which case the trigger information consists of a plurality of black bars and spaces of different widths arranged according to a predetermined coding rule.
Of course, the trigger pattern is not limited to these two forms; it may be any other recognizable image, such as a recognizable human face or object, as long as the trigger pattern has trigger information that can be used to trigger display of an interface corresponding to that information. Here, "the trigger pattern has trigger information" may mean that the trigger pattern itself contains the trigger information, or that the trigger pattern is associated with the trigger information. For example, after a human face is recognized, the trigger information associated with the recognized face may be acquired from a network server, and that trigger information is then used to trigger the corresponding interface.
After the image is obtained, the device first identifies whether the image contains a trigger pattern, and if so, identifies how many trigger patterns it contains.
If the image comprises two or more than two trigger patterns, determining at least part of the trigger patterns as target trigger patterns in the trigger patterns by a preset mode. The preset mode can be a manual selection mode or an automatic selection mode.
As shown in fig. 2, which is an exemplary diagram of an image including a plurality of two-dimensional codes.
Step S13: a target trigger pattern is identified.
After the target trigger pattern is determined, the target trigger pattern is identified. If only one target trigger pattern is available, only the target trigger pattern is identified, and if at least two target trigger patterns are available, the at least two target trigger patterns are identified.
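As a rough illustration only (not part of the patent), steps S11 to S13 can be sketched in a few lines of Python; the `TriggerPattern` type and the `select` callback standing in for the "preset mode" are hypothetical names introduced here for the sketch:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TriggerPattern:
    """A trigger pattern detected in the image, e.g. a two-dimensional code."""
    payload: str  # the trigger information carried by the pattern
    x: int        # top-left corner, image coordinates
    y: int
    w: int        # bounding-box width
    h: int        # bounding-box height

def process(patterns: List[TriggerPattern],
            select: Callable[[List[TriggerPattern]], TriggerPattern]) -> str:
    """S12/S13: with at least two patterns present, pick a target via the
    preset mode (`select`); a lone pattern is identified directly."""
    if not patterns:
        raise ValueError("no trigger pattern in image")
    target = patterns[0] if len(patterns) == 1 else select(patterns)
    return target.payload  # "identification" reduced to returning the payload
```

Here `select` can stand for any of the manual or automatic preset modes discussed in the following embodiments.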
According to the processing method provided by this embodiment of the present application, after an image containing at least two trigger patterns is obtained, a target trigger pattern is determined from the at least two patterns based on a preset mode and is identified. That is, a trigger pattern can be identified even when multiple trigger patterns coexist, providing a novel trigger pattern identification mode and improving the convenience of using trigger patterns.
Optionally, an implementation flowchart of determining a target trigger pattern based on a preset mode provided in an embodiment of the present application is shown in fig. 3, and may include:
step S31: a first input operation is detected.
After the image is obtained, the user may perform a selection operation through an operation body such as a finger or a stylus; the selection operation may be a single click, a double click, a frame selection, or the like on the trigger pattern to be selected.
Step S32: in response to a first input operation, a trigger pattern is selected from at least two trigger patterns as a target trigger pattern.
In an alternative embodiment, to increase the speed of determining the target trigger pattern, a voice input or a line-of-sight focus of the user may be detected, and the target trigger pattern is determined based on the voice input or the line-of-sight focus.
In an alternative embodiment, one implementation of determining the target trigger pattern based on the voice input may be:
when the voice input of the user is detected, acquiring the position relation between the at least two trigger patterns; the positional relationship may be recognized when at least two trigger patterns are recognized in the image, or when the user voice input is detected.
Recognizing the input voice to obtain the position information carried in the voice;
for example, the speech input by the user may be: "upper left", or "rightmost first row", or "middle second row", etc.
Then, based on the identified position information, the trigger pattern located at that position is determined from the at least two trigger patterns as the target trigger pattern.
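One way to map a recognized position phrase onto a detected pattern is to compare pattern centers against the extremes named in the phrase. This is a sketch under stated assumptions: the phrase vocabulary ("upper left", "lower right") and the dict fields are illustrative, not taken from the patent:

```python
def select_by_position(patterns, phrase):
    """Pick the pattern closest to the corner named in the voice phrase.
    Patterns are dicts with 'x'/'y' center coordinates."""
    xs = sorted(p["x"] for p in patterns)
    ys = sorted(p["y"] for p in patterns)
    words = phrase.split()
    tx = xs[0] if "left" in words else xs[-1]    # leftmost vs rightmost column
    ty = ys[0] if "upper" in words else ys[-1]   # top vs bottom row
    # Manhattan distance to the named corner picks the nearest pattern
    return min(patterns, key=lambda p: abs(p["x"] - tx) + abs(p["y"] - ty))
```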
In another alternative embodiment, one implementation of determining the target trigger pattern based on the voice input may be:
when a user speech input is detected, the identity of the respective trigger pattern is recognized.
Taking a two-dimensional code as an example, its identifier may be the pattern at the center of the code. For example, in fig. 2, the identifier of the two-dimensional code at the lower right corner is the "pan" character pattern at its center, and the identifier of the two-dimensional code at the lower left corner is the WeChat icon at its center.
Recognizing the input voice to obtain identification information carried in the voice;
and determining the trigger pattern with the identified identification information as a target trigger pattern from the at least two trigger patterns based on the identified identification information.
In this embodiment of the application, the user manually selects one trigger pattern from the plurality of trigger patterns as the target trigger pattern. In the prior art, by contrast, the target trigger pattern is brought into the scanning range by adjusting the distance between the electronic device and the trigger patterns: one trigger pattern must be isolated from the plurality so that only it enters the scanning range, which forces the user to adjust the distance between the scanning device and the trigger patterns many times, lengthens the operation, and easily leads to misidentification, for example, a non-target pattern mistakenly entering the scanning range as the scanning target. With manual selection, the user only needs to bring all the patterns into the scanning range and then manually select one trigger pattern for identification; this reduces the number of distance adjustments, shortens the operation process, and reduces the probability of misidentification.
The processing method provided by the application can be applied to an independent application or function interface, namely, the independent application or function interface only has the function of identifying the trigger pattern.
In an optional embodiment, the processing method provided by the present application may be applied to a first application, where the first application may acquire a preview image through an image acquisition unit and display the acquired preview image on a display unit, where the preview image is displayed in a full screen or almost in a full screen. The first application is capable of at least recognizing the captured preview image and storing the captured preview image in accordance with a triggering instruction. Based on this, one implementation manner of the processing method provided by the present application may be:
acquiring an image through an image acquisition unit, and displaying the image in a preview mode;
identifying the acquired image;
if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information;
a target trigger pattern is identified.
In an optional embodiment, the processing method may further include:
and detecting a fourth input operation, and responding to the fourth input operation and storing the acquired image.
In an optional embodiment, the processing method may further include:
if the image comprises at least two trigger patterns, outputting prompt information, wherein the prompt information is used for prompting a user that the trigger patterns can be identified;
and detecting a fifth input operation, determining a target trigger pattern based on a preset mode in response to the fifth input operation, and identifying the target trigger pattern, or displaying an identification result for identifying the trigger pattern in response to the fifth input operation. That is, after the image is recognized to include at least two trigger patterns, only the prompt information is output, the trigger patterns are not recognized, and the trigger patterns are recognized only when the user triggers the recognition; after the image is recognized to comprise at least two trigger patterns, the trigger patterns can also be recognized, prompt information is output, and after the fifth input operation is detected, the recognition result is directly output.
Optionally, another implementation manner for determining the target trigger pattern based on the preset manner provided in the embodiment of the present application may be:
according to a selection strategy, a first trigger pattern is selected from at least two trigger patterns as a target trigger pattern.
The selection strategy may be a random selection, i.e. a random selection of a trigger pattern from at least two trigger patterns as the target trigger pattern.
The selection strategy may also be to select the trigger patterns according to their sizes, for example, to select a trigger pattern as the target trigger pattern in the order from small to large or from large to small.
The selection strategy may also be to select one trigger pattern from the at least two trigger patterns as the target trigger pattern according to a predetermined traversal order. For example, if the at least two trigger patterns are arranged in a row, one trigger pattern may be selected in left-to-right or right-to-left order; if they are arranged in a column, one may be selected in top-to-bottom or bottom-to-top order. If the at least two trigger patterns are arranged in rows and columns, as shown in fig. 2, the selection may proceed row by row or column by column, and within each row or column the preceding orders apply; details are not repeated here.
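The three selection strategies just described (random, by size, by traversal order) can each be sketched as a one-line ordering rule; the dict fields are illustrative assumptions:

```python
import random

def select_random(patterns):
    """Random strategy: any pattern may become the target."""
    return random.choice(patterns)

def select_by_size(patterns, largest_first=True):
    """Size strategy: order by bounding-box area, take the first."""
    return sorted(patterns, key=lambda p: p["w"] * p["h"], reverse=largest_first)[0]

def select_by_traversal(patterns):
    """Traversal strategy: row-major order (top-to-bottom, then left-to-right)."""
    return sorted(patterns, key=lambda p: (p["y"], p["x"]))[0]
```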
Unlike the embodiment shown in fig. 3, in the present embodiment, one trigger pattern is selected as a target trigger pattern from at least two trigger patterns in an automatic selection manner.
Further, the processing method provided by the embodiment of the present application may further include:
a second input operation is detected.
And responding to the second input operation, and triggering to enter the re-identified interface.
After the target trigger pattern is determined, if the user finds that it is not the trigger pattern to be recognized, the user may operate the electronic device to return to the recognition interface for re-recognition. The re-recognition interface may be a display interface showing the at least two trigger patterns, such as the interface shown in fig. 2. It may be an interface displaying an image captured in real time, or it may display the image of the interface cached during the last recognition process. By using the cached image, the image acquisition unit can be turned off, saving power on the electronic device.
In this embodiment, the user can operate the electronic device to return to the recognition interface for re-recognition at any time after the target trigger pattern is determined. For example, the user may return within a short time (e.g., within 5 seconds, or even within 3 seconds) after the target trigger pattern is determined. As another example, the user may return while the target trigger pattern is being recognized; in this case the recognized content may be displayed in real time, that is, each part of the content is displayed as soon as it is recognized rather than all at once, so the user can immediately judge from the displayed content whether the correct trigger pattern is being recognized. As yet another example, the user may return after recognition of the target trigger pattern is complete: the interface corresponding to the recognition result is displayed, and if the user finds that the wrong pattern was recognized, the user can operate the electronic device to return to the recognition interface for re-recognition.
According to the selection strategy, a second trigger pattern different from the first trigger pattern is selected as a target trigger pattern.
When returning to the recognition interface for re-recognition, an unrecognized trigger pattern is selected as the target trigger pattern; that is, a trigger pattern that has already been recognized is not recognized again, avoiding unnecessary repeated operations. By contrast, in the prior art the scanning function is usually turned off when the user exits the recognition result interface, and must be turned on again if the user wants to scan again.
On a re-scan, if the strategy were unchanged, the previously identified trigger pattern might again be selected with priority, even though the user's return operation has already shown that it is not the pattern the user intends to identify. Therefore an unrecognized trigger pattern needs to be selected for recognition: for example, the next trigger pattern after the previously recognized one in large-to-small size order, or the next one in left-to-right order, or a new trigger pattern chosen at random from the unselected ones.
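The "skip what was already tried" behavior amounts to filtering out previously identified patterns before applying the selection strategy again. A minimal sketch (traversal order is used as the illustrative strategy; the `id` field is an assumption):

```python
def next_target(patterns, tried_ids):
    """Pick the next target on re-identification, skipping patterns whose
    ids are in `tried_ids`; returns None once every pattern has been tried."""
    remaining = [p for p in patterns if p["id"] not in tried_ids]
    if not remaining:
        return None
    # reapply the traversal-order strategy to the untried patterns
    return sorted(remaining, key=lambda p: (p["y"], p["x"]))[0]
```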
In an alternative embodiment, another implementation manner of determining the target trigger pattern based on the preset manner may be:
all trigger patterns in the image are taken as target trigger patterns.
Unlike the previous embodiments, in which one trigger pattern is selected as the target, in this embodiment all trigger patterns are taken as target trigger patterns, i.e., all trigger patterns are identified.
The processing method provided by the embodiment of the application may further include:
and displaying interfaces corresponding to the recognition results of all the trigger patterns.
In this embodiment, after all the trigger patterns are identified, the interfaces corresponding to all the recognition results may be displayed simultaneously; alternatively, the interface corresponding to each trigger pattern's recognition result may be displayed as soon as that pattern is recognized, that is, the interfaces are displayed one by one.
The interfaces can be displayed in one page or in multiple pages, that is, each page only displays part of the interfaces, and the user can view the interfaces in other pages through page switching.
And detecting a third input operation, and responding to the third input operation to select an interface as a final display interface.
The third input operation may be an operation by which the user selects an interface after the interfaces corresponding to all the recognition results have been displayed, or an operation by which the user selects an interface upon finding the correct one while the interfaces are being displayed one by one. After the user selects an interface, any trigger patterns not yet identified are no longer identified; that is, the identification process is terminated.
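The multi-page display of result interfaces described above can be sketched as a simple partition of the interface list; `per_page` is an illustrative parameter, not from the patent:

```python
def paginate(interfaces, per_page):
    """Split the result interfaces into pages; the user switches pages to
    view interfaces that do not fit on the current one."""
    return [interfaces[i:i + per_page] for i in range(0, len(interfaces), per_page)]
```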
In this embodiment, all the trigger patterns are taken as target trigger patterns and identified, the interfaces corresponding to the recognition results are displayed to the user, and the user selects one of them as the final interface to be displayed. This selection mode is more intuitive, improves the convenience of applying trigger patterns, and alleviates the problem of a high misidentification rate of trigger patterns.
Optionally, the processing method provided in the embodiment of the present invention may further include:
and acquiring identification information.
The target trigger pattern and the identification information are displayed in association to distinguish the target trigger pattern from non-target trigger patterns in the image.
To make the target trigger pattern more clearly distinct from the non-target trigger patterns, the identification information may have a specific display effect: for example, it may be displayed dynamically, in bold, or highlighted.
In this embodiment, the target trigger pattern and the non-target trigger patterns are displayed in a distinguished way, so the user can clearly see which trigger pattern is being recognized; if the recognized pattern turns out to be wrong, the user can correct it manually, for example by manually selecting the correct trigger pattern for recognition. The identification information guides the user to select the correct trigger pattern, reducing the probability of misidentifying a trigger pattern.
In an alternative embodiment, one implementation of displaying the target trigger pattern and the identification information in a linked manner may be:
a preset area centered on the target trigger pattern is determined.
The preset area is a partial area of the image. It at least includes the complete target trigger pattern; it may exclude the non-target trigger patterns entirely, or include part of a non-target trigger pattern, but never a complete non-target trigger pattern.
And displaying the identification information on the content in the preset area in an overlapping manner.
The identification information may be a frame, and when the identification information is superimposed on the content displayed in the preset area, the target trigger pattern is located in the frame.
Fig. 4 exemplarily illustrates the case where the identification information is superimposed on the content displayed in the preset area; in this example, the identification information is a bold frame.
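The preset-area computation above can be sketched as follows. This is one possible geometry, assuming the "preset area" is the target pattern's bounding box padded by a margin and clamped to the image; the margin value and all names are hypothetical.

```python
def preset_area(target_box, img_w, img_h, margin=20):
    """Return a rectangle centered on the target trigger pattern's bounding
    box (x, y, w, h), padded by `margin` pixels and clamped to the image
    bounds. The identification frame would be drawn along this rectangle."""
    x, y, w, h = target_box
    left = max(0, x - margin)
    top = max(0, y - margin)
    right = min(img_w, x + w + margin)
    bottom = min(img_h, y + h + margin)
    return (left, top, right, bottom)
```

A rendering layer would then draw the bold frame along `(left, top, right, bottom)`, so the target trigger pattern sits inside the frame while non-target patterns mostly fall outside it.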
In an alternative embodiment, the target trigger pattern may be identified while the target trigger pattern and the identification information are displayed in association.
Since the electronic device may process faster than the user can notice the identification information, if the target trigger pattern is recognized while it is being displayed in association with the identification information, recognition may already be finished before the user has even found the identification information. Based on this,
in another alternative embodiment, the target trigger pattern may be recognized after the target trigger pattern and the identification information are displayed in association. This reserves some time for the user to find the identification information and, if necessary, correct the target trigger pattern promptly. For example, if the user finds that the target trigger pattern is incorrect, the user may manually select the correct trigger pattern as the target trigger pattern; after the selection, an instruction to recognize the selected target trigger pattern is generated. Based on this, after the target trigger pattern and the identification information are displayed in association and before the target trigger pattern is recognized, the method may further include:
a first input operation is detected.
In response to a first input operation, a trigger pattern is selected from at least two trigger patterns as a target trigger pattern.
Corresponding to the method embodiment, an embodiment of the present application further provides a processing apparatus, as shown in fig. 5, a schematic structural diagram of the processing apparatus provided in the embodiment of the present application may include:
a processor 51, and a memory 52 communicatively coupled to the processor 51.
The processor 51 is configured to obtain an image; identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; identifying a target trigger pattern; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information.
The image acquired by the processor 51 may be an image acquired by an image acquisition device of the electronic device, an image stored locally in the electronic device, or an image acquired from the internet.
The trigger pattern may be a two-dimensional code, whose trigger information is the black-and-white pattern distributed on a plane, that is, in two dimensions, according to a certain rule. Alternatively,
the trigger pattern may be a bar code, whose trigger information comprises a plurality of black bars and spaces of different widths arranged according to a predetermined coding rule.
Of course, the trigger pattern is not limited to the above two forms; it may be any other recognizable image, such as a recognizable human face or object, as long as it has trigger information that can be used to trigger display of a corresponding interface. The trigger pattern having trigger information may mean that the pattern itself contains the trigger information. It may also mean that the pattern is associated with trigger information; for example, after a human face is recognized, the trigger information associated with the recognized face is acquired from a network server and used to trigger the corresponding interface.
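The two ways a pattern can "have" trigger information can be sketched as below. The lookup dict stands in for the network server mentioned above; all identifiers and URLs are hypothetical placeholders.

```python
# Info embedded in the pattern itself (e.g. decoded from a QR code).
EMBEDDED = {"qr-1": "https://example.com/page-1"}

# Info only associated externally (e.g. a recognized face looked up on a server).
ASSOCIATED = {"face-alice": "https://example.com/alice"}

def resolve_trigger_info(pattern_id):
    """Return the trigger information for a pattern, whether it is
    embedded in the pattern or fetched by association."""
    if pattern_id in EMBEDDED:
        return EMBEDDED[pattern_id]       # pattern itself contains the info
    return ASSOCIATED.get(pattern_id)     # otherwise query the (mock) server
```

In a real system the `ASSOCIATED` lookup would be a network request, and failure handling (no info found) would drive the "identification not carried out" branch.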
After the image is obtained, it is first determined whether the image contains any trigger pattern and, if so, how many trigger patterns it contains.
If the image contains two or more trigger patterns, at least some of them are determined as target trigger patterns in a preset mode. The preset mode may be a manual selection mode or an automatic selection mode.
After the target trigger pattern is determined, the target trigger pattern is identified. If only one target trigger pattern is available, only the target trigger pattern is identified, and if at least two target trigger patterns are available, the at least two target trigger patterns are identified.
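The determination step can be sketched as follows. This is an illustrative stand-in: the patent does not fix an automatic selection strategy, so the "auto" branch here uses a largest-bounding-box heuristic purely as an example; the dict layout and names are hypothetical.

```python
def select_targets(patterns, mode="auto", manual_pick=None):
    """patterns: list of dicts, each with a 'box' of (x, y, w, h).
    'manual' returns the pattern chosen by the user (manual_pick callback);
    'auto' applies one possible selection strategy: largest pattern area."""
    if not patterns:
        return []
    if mode == "manual":
        return [manual_pick(patterns)]
    # Hypothetical automatic strategy: prefer the largest trigger pattern.
    return [max(patterns, key=lambda p: p["box"][2] * p["box"][3])]
```

If several target trigger patterns are wanted (the "identify all" embodiment), the function would simply return `patterns` unchanged instead of a single element.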
According to the processing device provided in this embodiment of the application, after an image containing at least two trigger patterns is obtained, a target trigger pattern is determined from them based on a preset mode and then identified. That is, trigger patterns are identified even when multiple trigger patterns are present at the same time. This provides a new way of identifying trigger patterns and improves the convenience of their use.
In an alternative embodiment, the processor 51 may be configured to acquire an image through the image acquisition unit and display the image in a preview manner; identifying the acquired image; if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information; a target trigger pattern is identified.
In an alternative embodiment, the processor 51 may be further configured to detect a fourth input operation, and store the acquired image in response to the fourth input operation.
In an alternative embodiment, the processor 51 may be further configured to, if the image includes at least two trigger patterns, output a prompt message for prompting the user that the trigger patterns can be recognized; and detecting a fifth input operation, determining a target trigger pattern based on a preset mode in response to the fifth input operation, and identifying the target trigger pattern, or displaying an identification result for identifying the trigger pattern in response to the fifth input operation.
In an alternative embodiment, when determining the target trigger pattern based on the preset mode, the processor 51 is specifically configured to: detecting a first input operation; and responding to the first input operation, and selecting one trigger pattern from the at least two trigger patterns as a target trigger pattern.
In an alternative embodiment, to increase the speed of determining the target trigger pattern, the processor 51 may detect a voice input or a gaze focus of the user, and determine the target trigger pattern based on the voice input or the gaze focus.
In an alternative embodiment, when the processor 51 determines the target trigger pattern based on the voice input, it may specifically be configured to: when the voice input of the user is detected, acquire the positional relationship between the at least two trigger patterns; recognize the input voice to obtain the position information carried in it; and determine, from the at least two trigger patterns, the trigger pattern located at the identified position as the target trigger pattern.
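A hypothetical sketch of resolving a spoken position word ("left", "right") against the patterns' positional relationship, assuming the speech recognizer has already extracted the keyword; all names are illustrative.

```python
def select_by_position(patterns, keyword):
    """patterns: list of (name, (x, y, w, h)) bounding boxes.
    keyword: a position word recognized from the user's speech.
    Compares bounding-box centers to resolve the spoken position."""
    centers = [(name, x + w / 2.0) for name, (x, y, w, h) in patterns]
    if keyword == "left":
        return min(centers, key=lambda c: c[1])[0]   # leftmost center
    if keyword == "right":
        return max(centers, key=lambda c: c[1])[0]   # rightmost center
    raise ValueError("unsupported position keyword: " + keyword)
```

Vertical keywords ("top", "bottom") or compound ones ("top-left") would extend the same comparison to the y coordinate.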
In an alternative embodiment, when the processor 51 determines the target trigger pattern based on the voice input, it may specifically be configured to: when the voice input of the user is detected, identify the identifier of each trigger pattern; recognize the input voice to obtain the identification information carried in it; and determine, from the at least two trigger patterns, the trigger pattern whose identifier matches the recognized identification information as the target trigger pattern.
In an alternative embodiment, when determining the target trigger pattern based on the preset mode, the processor 51 is specifically configured to: and selecting a first trigger pattern from the at least two trigger patterns as a target trigger pattern according to a selection strategy.
In an alternative embodiment, the processor 51 may be further configured to detect a second input operation; respond to the second input operation by triggering entry into a re-identification interface; and select, according to the selection strategy, a second trigger pattern different from the first trigger pattern as the target trigger pattern.
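One hypothetical sketch of this re-identification step, assuming the selection strategy simply tries the patterns in order and skips any pattern already tried; the function name is illustrative.

```python
def next_target(patterns, previous):
    """After the second input operation triggers re-identification,
    pick the next trigger pattern in selection-strategy order,
    skipping the one that was already tried."""
    candidates = [p for p in patterns if p != previous]
    return candidates[0] if candidates else None  # None: nothing left to try
```

A real strategy might instead track all previously tried patterns, or re-rank the remaining candidates, but the skip-the-first-choice behavior is the essential part described here.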
In an alternative embodiment, when determining the target trigger pattern based on the preset mode, the processor 51 is specifically configured to: and taking all trigger patterns in the image as target trigger patterns.
In an optional embodiment, the processor 51 may be further configured to display interfaces corresponding to recognition results of all the trigger patterns; and detecting a third input operation, and responding to the third input operation to select an interface as a final display interface.
In an optional embodiment, the processor 51 may be further configured to obtain identification information; the target trigger pattern and the identification information are displayed in association to distinguish the target trigger pattern from non-target trigger patterns in the image.
In an optional embodiment, when the processor 51 displays the target trigger pattern and the identification information in a related manner, it is specifically configured to: determining a preset area with a target trigger pattern as a center; and displaying the identification information on the content in the preset area in an overlapping manner.
In an alternative embodiment, the processor 51 identifies the target trigger pattern while displaying the target trigger pattern and the identification information in association; alternatively, the target trigger pattern is identified after the target trigger pattern and the identification information are displayed in association.
Corresponding to the method embodiment, another schematic structural diagram of the processing apparatus provided in the embodiment of the present application is shown in fig. 6, and may include:
an obtaining module 61, a determining module 62 and an identifying module 63; wherein,
the obtaining module 61 is used for obtaining an image.
The image may be an image acquired by an image acquisition device of the electronic device, an image locally stored by the electronic device, or an image acquired from the internet.
The determining module 62 is configured to identify the image, and determine a target trigger pattern based on a preset mode if the image includes at least two trigger patterns; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information.
The trigger pattern may be a two-dimensional code, whose trigger information is the black-and-white pattern distributed on a plane, that is, in two dimensions, according to a certain rule. Alternatively,
the trigger pattern may be a bar code, whose trigger information comprises a plurality of black bars and spaces of different widths arranged according to a predetermined coding rule.
Of course, the trigger pattern is not limited to the above two forms; it may be any other recognizable image, such as a recognizable human face or object, as long as it has trigger information that can be used to trigger display of a corresponding interface. The trigger pattern having trigger information may mean that the pattern itself contains the trigger information. It may also mean that the pattern is associated with trigger information; for example, after a human face is recognized, the trigger information associated with the recognized face is acquired from a network server and used to trigger the corresponding interface.
After the image is obtained, it is first determined whether the image contains any trigger pattern and, if so, how many trigger patterns it contains.
If the image contains two or more trigger patterns, at least some of them are determined as target trigger patterns in a preset mode. The preset mode may be a manual selection mode or an automatic selection mode.
The recognition module 63 is used to recognize the target trigger pattern.
After the target trigger pattern is determined, the target trigger pattern is identified. If only one target trigger pattern is available, only the target trigger pattern is identified, and if at least two target trigger patterns are available, the at least two target trigger patterns are identified.
According to the processing device provided in this embodiment of the application, after an image containing at least two trigger patterns is obtained, a target trigger pattern is determined from them based on a preset mode and then identified. That is, trigger patterns are identified even when multiple trigger patterns are present at the same time. This provides a new way of identifying trigger patterns and improves the convenience of their use.
In an alternative embodiment, the obtaining module 61 may obtain the image through an image capturing unit and display the image in a preview manner.
In an optional embodiment, the processing apparatus may further include:
and the second detection module is used for detecting a fourth input operation and responding to the fourth input operation to store the acquired image.
In an optional embodiment, the determining module 62 may be further configured to, if the image includes at least two trigger patterns, output a prompt message, where the prompt message is used to prompt the user that the trigger patterns can be recognized;
the processing device may further include:
and the third detection module is used for detecting a fifth input operation, determining a target trigger pattern based on a preset mode in response to the fifth input operation, and identifying the target trigger pattern, or displaying an identification result for identifying the trigger pattern in response to the fifth input operation.
In an alternative embodiment, the determination module 62 may include:
a detection unit configured to detect a first input operation;
the first selection unit is used for responding to a first input operation and selecting one trigger pattern from at least two trigger patterns as a target trigger pattern.
In an alternative embodiment, in order to increase the speed of determining the target trigger pattern, the detection unit may detect a voice input or a line-of-sight focus of the user, and the first selection unit determines the target trigger pattern based on the voice input or the line-of-sight focus.
In an optional embodiment, when the detection unit detects a user voice input, the first selection unit may be specifically configured to: acquire the positional relationship between the at least two trigger patterns; recognize the input voice to obtain the position information carried in it; and determine, from the at least two trigger patterns, the trigger pattern located at the identified position as the target trigger pattern.
In an optional embodiment, when the detection unit detects a user voice input, the first selection unit may be specifically configured to: identify the identifier of each trigger pattern; recognize the input voice to obtain the identification information carried in it; and determine, from the at least two trigger patterns, the trigger pattern whose identifier matches the recognized identification information as the target trigger pattern.
In another alternative embodiment, the determination module 62 may include:
and the second selection unit is used for selecting the first trigger pattern from the at least two trigger patterns as a target trigger pattern according to the selection strategy.
In an optional embodiment, the processing apparatus may further include:
the first detection module is used for detecting a second input operation;
the triggering module is used for responding to the second input operation and triggering entry into a re-identification interface;
and the first selection module is used for selecting a second trigger pattern different from the first trigger pattern as a target trigger pattern according to the selection strategy.
In an alternative embodiment, the determination module 62 may include:
and the third selection unit is used for taking all the trigger patterns in the image as target trigger patterns.
In an optional embodiment, the processing apparatus may further include:
the first display output module is used for displaying interfaces corresponding to the identification results of all the trigger patterns;
and the second selection module is used for detecting a third input operation and responding to the third input operation to select an interface as a final display interface.
In an optional embodiment, the processing apparatus may further include:
the acquisition module is used for acquiring the identification information;
and the second display output module is used for displaying the target trigger pattern and the identification information in association, so that the target trigger pattern is distinguished from the non-target trigger patterns in the image.
In an alternative embodiment, the second display output module includes:
a determination unit for determining a preset region centered on the target trigger pattern;
and the display output unit is used for displaying the identification information on the content in the preset area in an overlapping manner.
In an alternative embodiment, the recognition module 63 recognizes the target trigger pattern while the second display output module displays the target trigger pattern and the identification information in association.
In an alternative embodiment, the recognition module 63 recognizes the target trigger pattern after the second display output module displays the target trigger pattern and the identification information in association.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that, in the embodiments of the present application, the respective embodiments and their corresponding features may be combined with each other to solve the foregoing technical problem.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing, comprising:
obtaining an image;
identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information;
identifying the target trigger pattern.
2. The method of claim 1, wherein determining the target trigger pattern based on a preset pattern comprises:
detecting a first input operation;
in response to the first input operation, a trigger pattern is selected from the at least two trigger patterns as a target trigger pattern.
3. The method of claim 1, wherein determining the target trigger pattern based on a preset pattern comprises:
and selecting a first trigger pattern from the at least two trigger patterns as a target trigger pattern according to a selection strategy.
4. The method of claim 3, further comprising:
detecting a second input operation;
responding to the second input operation, and triggering entry into a re-identification interface;
and selecting a second trigger pattern different from the first trigger pattern as a target trigger pattern according to the selection strategy.
5. The method of claim 1, wherein determining the target trigger pattern based on a preset pattern comprises: all trigger patterns in the image are taken as target trigger patterns; the method further comprises the following steps:
displaying interfaces corresponding to the recognition results of all the trigger patterns;
and detecting a third input operation, and responding to the third input operation to select an interface as a final display interface.
6. The method of claim 1, further comprising:
acquiring identification information;
displaying the target trigger pattern and the identification information in association to distinguish the target trigger pattern from non-target trigger patterns in the image.
7. The method of claim 6, wherein displaying the target trigger pattern and the identification information in association comprises:
determining a preset area with the target trigger pattern as a center;
and displaying the identification information on the content in the preset area in an overlapping manner.
8. The method of claim 6, wherein the target trigger pattern is recognized while the target trigger pattern and the identification information are displayed in association; or the target trigger pattern is recognized after the target trigger pattern and the identification information are displayed in association.
9. A processing apparatus, comprising: a processor, and a memory communicatively coupled with the processor;
the processor is used for obtaining an image; identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; identifying the target trigger pattern; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information.
10. A processing apparatus, comprising:
the acquisition module is used for acquiring an image;
the determining module is used for identifying the image, and if the image comprises at least two trigger patterns, determining a target trigger pattern based on a preset mode; the trigger pattern is provided with trigger information, and the trigger information can be used for triggering and displaying an interface corresponding to the trigger information;
and the identification module is used for identifying the target trigger pattern.
CN201710907874.9A 2017-09-29 2017-09-29 Processing method and processing device Pending CN107609452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710907874.9A CN107609452A (en) 2017-09-29 2017-09-29 Processing method and processing device


Publications (1)

Publication Number Publication Date
CN107609452A true CN107609452A (en) 2018-01-19

Family

ID=61067826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710907874.9A Pending CN107609452A (en) 2017-09-29 2017-09-29 Processing method and processing device

Country Status (1)

Country Link
CN (1) CN107609452A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070119941A1 (en) * 2005-11-30 2007-05-31 Symbol Technologies, Inc. Bar code scanner programming
CN102779264A (en) * 2012-07-10 2012-11-14 北京恒信彩虹科技有限公司 Method and device for realizing barcode recognition
CN103870488A (en) * 2012-12-13 2014-06-18 联想(北京)有限公司 File obtaining method and electronic device
CN103955660A (en) * 2014-04-22 2014-07-30 广州闪购软件服务有限公司 Method for recognizing batch two-dimension code images
CN104573597A (en) * 2013-10-10 2015-04-29 腾讯科技(深圳)有限公司 Two-dimension code identification method and identification device
CN106156685A (en) * 2016-07-07 2016-11-23 立德高科(昆山)数码科技有限责任公司 The method of multiple Quick Response Codes, device and the terminal that recognition is in the same area
CN106874817A (en) * 2016-07-27 2017-06-20 阿里巴巴集团控股有限公司 Two-dimensional code identification method, equipment and mobile terminal


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128244A (en) * 2021-03-12 2021-07-16 维沃移动通信有限公司 Scanning method and device and electronic equipment
WO2022188803A1 (en) * 2021-03-12 2022-09-15 维沃移动通信有限公司 Scanning method and apparatus, and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180119