Disclosure of Invention
The invention provides a voice control method and device based on an interactive page, and a computer-readable storage medium, mainly aiming to solve the problem of low voice control efficiency.
In order to achieve the above object, the present invention provides a voice control method based on an interactive page, including:
acquiring a display image of an interactive page, and performing image segmentation on the display image to obtain a plurality of sub-images;
generating feature codes respectively corresponding to the display image and the plurality of sub-images, judging whether the feature codes respectively corresponding to the display image and the plurality of sub-images exist in a preset data table, acquiring operation texts contained in the display image or the plurality of sub-images according to the judging result, and caching the acquired operation texts in an operation text cache library;
receiving control voice aiming at the interactive page, and converting the control voice into a control text;
and determining an operation text in the operation text cache library whose matching degree with the control text meets a matching condition as a target operation text, and controlling the interactive page to execute the operation corresponding to the target operation text.
Optionally, the generating the feature codes corresponding to the display image and the plurality of sub-images respectively includes:
compressing the display image and the plurality of sub-images into preset sizes to obtain a plurality of compressed images, and graying the plurality of compressed images;
and calculating the gray feature value of each pixel point in the compressed images, and combining the gray feature values to obtain the feature codes of the display image and the plurality of sub-images.
Optionally, the determining whether the feature codes corresponding to the display image and the plurality of sub-images exist in the preset data table includes:
extracting the feature code corresponding to the display image and retrieving it in the data table;
if the feature code corresponding to the display image is retrieved, judging that the feature code corresponding to the display image exists in the data table;
if the feature code corresponding to the display image cannot be retrieved, retrieving the feature codes corresponding to the plurality of sub-images in the data table one by one, and judging whether the feature codes corresponding to all the sub-images can be retrieved;
if the feature codes corresponding to all the sub-images can be retrieved, judging that the feature codes corresponding to all the sub-images exist in the data table;
if none of the feature codes corresponding to the sub-images can be retrieved, judging that the feature codes corresponding to the display image and the plurality of sub-images do not exist in the data table;
and if only the feature codes corresponding to part of the sub-images can be retrieved, judging that the feature codes corresponding to part of the sub-images exist in the data table.
Optionally, the obtaining the operation text contained in the display image or the plurality of sub-images according to the judging result includes:
when it is judged that the feature code corresponding to the display image exists in the data table, extracting the text corresponding to that feature code from the data table as the operation text contained in the display image;
when it is judged that the feature codes corresponding to all the sub-images exist in the data table, extracting the texts corresponding to those feature codes from the data table as the operation texts contained in the sub-images;
when it is judged that the feature codes corresponding to part of the sub-images exist in the data table, determining the sub-images whose feature codes do not exist in the data table as target images, and acquiring the operation texts according to the target images;
and when it is judged that the feature codes corresponding to the display image and the plurality of sub-images do not exist in the data table, determining the display image and the plurality of sub-images as target images;
Wherein the obtaining the operation text contained in the display image or the plurality of sub-images according to the target image comprises:
calculating similarity scores between the images stored in the data table and the target image by using a preset image similarity algorithm, and judging whether the similarity scores are greater than a preset threshold;
when a similarity score is greater than the preset threshold, extracting the text corresponding to the matched image from the data table as the operation text contained in the target image;
and when no similarity score exceeds the preset threshold, performing text recognition on the target image to obtain a recognition text, and taking the recognition text as the operation text contained in the target image.
Optionally, the performing image segmentation on the display image to obtain a plurality of sub-images includes:
extracting features of the display image by using a pre-constructed segmentation network to obtain multiple image features of the display image;
and performing image segmentation on the display image according to the multiple image features to obtain a plurality of sub-images of the display image.
Optionally, the extracting features of the display image by using a pre-constructed segmentation network to obtain multiple image features of the display image includes:
performing convolution processing and pooling processing on the display image by using the segmentation network to obtain a pooled image;
performing full connection processing on the pooled image to obtain a full-connection feature map;
and performing multi-scale feature extraction on the full-connection feature map to obtain the multiple image features of the display image.
Optionally, the determining an operation text in the operation text cache library whose matching degree with the control text meets the matching condition as the target operation text includes:
calculating the similarity between each cached operation text in the operation text cache library and the control text one by one, and judging whether the similarity is greater than a first threshold;
if a similarity is greater than the first threshold, selecting the cached operation text corresponding to the maximum similarity among the similarities greater than the first threshold as the target operation text;
and if no similarity is greater than the first threshold, performing pronunciation unit recognition on the control voice, and determining the target operation text according to the recognition result.
In order to solve the above problems, the present invention also provides a voice control device based on an interactive page, the device comprising:
the image acquisition module is used for acquiring a display image of the interactive page, and performing image segmentation on the display image to obtain a plurality of sub-images;
the image text acquisition module is used for generating the feature codes respectively corresponding to the display image and the plurality of sub-images, judging whether the feature codes respectively corresponding to the display image and the plurality of sub-images exist in a preset data table, acquiring the operation texts contained in the display image or the plurality of sub-images according to the judging result, and caching the acquired operation texts in an operation text cache library;
the control voice acquisition module is used for receiving control voice aiming at the interactive page, and converting the control voice into a control text;
and the voice control module is used for determining an operation text in the operation text cache library whose matching degree with the control text meets the matching condition as the target operation text, and controlling the interactive page to execute the operation corresponding to the target operation text.
In order to solve the above problems, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the interactive page-based voice control method described above.
In order to solve the above problems, the present invention also provides a computer-readable storage medium having stored therein at least one computer program, the at least one computer program being executed by a processor in an electronic device to implement the interactive page-based voice control method described above.
According to the embodiments of the invention, after the display image of the interactive page is acquired, the operation texts contained in the display image are rapidly extracted through image segmentation and feature code query and stored in the operation text cache library, so that the operation texts contained in the display image of the interactive page can be rapidly reused; after the control text is obtained from the control voice, the operation text matching the control text is searched directly in the operation text cache library, which improves the efficiency of determining the operation text corresponding to the control voice during voice control and thus the overall voice control efficiency. In addition, the scheme supports rapid and accurate operation for a large number of users across a large number of different pages, improving user experience. Therefore, the voice control method and device based on an interactive page, the electronic device, and the computer-readable storage medium of the invention can solve the problem of low voice control efficiency.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a voice control method based on an interactive page. The execution subject of the interactive page-based voice control method includes, but is not limited to, at least one of a server, a terminal, and other electronic devices that can be configured to execute the method provided by the embodiment of the application. In other words, the interactive page-based voice control method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server side includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
Referring to fig. 1, a flowchart of a voice control method based on an interactive page according to an embodiment of the present invention is shown. In this embodiment, the voice control method based on the interactive page includes:
S1, acquiring a display image of an interactive page, and performing image segmentation on the display image to obtain a plurality of sub-images.
In the embodiment of the invention, the display image can be obtained by directly capturing a screenshot of the current page or by invoking the current page through a system function, and the obtained display image contains the image information of the page.
In the embodiment of the present invention, the performing image segmentation on the display image to obtain a plurality of sub-images includes:
extracting features of the display image by using a pre-constructed segmentation network to obtain multiple image features of the display image;
and performing image segmentation on the display image according to the multiple image features to obtain a plurality of sub-images of the display image.
In the embodiment of the invention, the segmentation network can adopt a convolutional neural network with a feature extraction function, such as the SegNet, E-Net, V-Net or ResNet network.
In an embodiment of the present invention, the segmentation network adopts a ResNet network, and an FPN feature pyramid structure is added to the segmentation network to extract multiple features of the display image and obtain the multiple image features corresponding to the display image.
Specifically, the extracting features of the display image by using the pre-constructed segmentation network to obtain multiple image features of the display image includes:
performing convolution processing and pooling processing on the display image by using the segmentation network to obtain a pooled image;
performing full connection processing on the pooled image to obtain a full-connection feature map;
and performing multi-scale feature extraction on the full-connection feature map to obtain the multiple image features of the display image.
For example, for a display image A, convolution, pooling and full connection processing are performed on the display image A by using the segmentation network to obtain the full-connection feature map corresponding to the display image A; multiple feature extraction is then performed on the full-connection feature map by using an FPN feature pyramid structure with 5 output layers in the segmentation network, yielding image features at 5 different sizes as the multiple image features of display image A.
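As a minimal sketch of this 5-level multi-scale extraction (assuming torchvision ≥ 0.13 and its resnet_fpn_backbone helper with a ResNet-50 backbone; the disclosure does not name a specific library):

```python
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet backbone with an FPN on top: one feature map per pyramid level,
# matching the 5-layer-output structure described for display image A.
backbone = resnet_fpn_backbone(backbone_name='resnet50', weights=None)
backbone.eval()

image = torch.randn(1, 3, 800, 800)    # stand-in for the display image
with torch.no_grad():
    features = backbone(image)         # OrderedDict: 5 feature maps at 5 scales
for level, fmap in features.items():
    print(level, tuple(fmap.shape))    # spatial size shrinks level by level
```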
Further, the performing image segmentation on the display image according to the multiple image features to obtain a plurality of sub-images of the display image includes:
selecting segmentation block diagrams corresponding to the multiple image features from preset segmentation block diagrams;
and frame-selecting regions of the display image multiple times according to the segmentation block diagrams, and taking the different images obtained by frame selection as the plurality of sub-images.
For example, if the multiple image features are image features of five sizes, the corresponding segmentation block diagrams are selected from the preset segmentation block diagrams according to the five sizes, and the corresponding segmentation block diagrams are then used one by one to frame-select regions in the display image, so that the frame-selected sub-images are obtained, as in the sketch below.
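A minimal sketch of the frame-selection step, assuming the segmentation block diagrams reduce to crop boxes in display-image coordinates (the box values and file name are hypothetical):

```python
from PIL import Image

# Hypothetical segmentation block diagrams: one crop box per selected scale,
# expressed as (left, top, right, bottom) in display-image coordinates.
boxes = [(0, 0, 400, 300), (400, 0, 800, 300), (0, 300, 800, 600)]

display = Image.open('display.png')               # assumed page screenshot
subgraphs = [display.crop(box) for box in boxes]  # frame-selected sub-images
```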
In the embodiment of the invention, extracting multiple features of the display image yields image features at various scales, which is beneficial to improving the accuracy of the sub-images obtained by segmentation.
S2, generating feature codes respectively corresponding to the display image and the plurality of sub-images, judging whether the feature codes respectively corresponding to the display image and the plurality of sub-images exist in a preset data table, acquiring the operation texts contained in the display image or the plurality of sub-images according to the judging result, and caching the acquired operation texts in an operation text cache library.
In the embodiment of the present invention, the feature code corresponding to the display image or to each of its sub-images may be an MD5 code or a hash code generated from a representation of the image.
In the embodiment of the invention, the data table is a pre-stored image information database that contains a large number of images and sub-images, together with the information (such as feature codes and texts) corresponding to those images and sub-images.
In the embodiment of the present invention, the generating the feature codes corresponding to the display image and the plurality of sub-images respectively includes:
compressing the display image and the plurality of sub-images into preset sizes to obtain a plurality of compressed images, and graying the plurality of compressed images;
and calculating the gray feature value of each pixel point in the compressed images, and combining the gray feature values to obtain the feature codes respectively corresponding to the display image and the plurality of sub-images.
In the embodiment of the invention, the gray feature value may be a gray difference value, a gray average value, or the like. If the gray difference value is used, the gray difference between adjacent pixel points in the compressed image is calculated, and each difference is binarized (for example, a positive number or 0 is recorded as 1 and a negative number as 0) to obtain the feature codes of the display image and the plurality of sub-images. If the gray average value is used, the average of the gray values of all pixel points in the compressed image is calculated as the gray average value, and each pixel is binarized against it (for example, a gray value greater than or equal to the average is recorded as 1 and a value below it as 0) to obtain the feature codes of the display image of the interactive page and the plurality of sub-images.
In another optional embodiment of the present invention, the display image and the plurality of sub-images may be converted into character arrays, and the MD5 codes of the display image and the plurality of sub-images may be obtained by performing MD5 encoding on those character arrays, where the MD5 code may be used as the feature code.
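A minimal sketch of both feature-code variants, assuming Pillow for image handling; the gray-difference variant follows the positive-or-zero-as-1 rule above, and the MD5 variant hashes the image's byte array:

```python
import hashlib
from PIL import Image

def gray_difference_code(img: Image.Image, size: int = 8) -> str:
    """Compress to a preset size, gray the image, then record each
    adjacent-pixel gray difference as 1 (>= 0) or 0 (negative)."""
    small = img.convert('L').resize((size + 1, size))
    px = list(small.getdata())
    return ''.join(
        '1' if px[r * (size + 1) + c] - px[r * (size + 1) + c + 1] >= 0 else '0'
        for r in range(size) for c in range(size)
    )

def md5_code(img: Image.Image) -> str:
    """Variant: MD5-encode the image's character (byte) array."""
    return hashlib.md5(img.convert('L').tobytes()).hexdigest()
```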
In an optional embodiment of the present invention, before the obtaining, according to the determination result, the operation text included in the display image or the plurality of sub-images, the method further includes:
forming mapping relations between the display image and the plurality of sub-images and the feature codes corresponding to them;
and generating the operation texts contained in the display image and the plurality of sub-images, forming mapping relations between the display image and the plurality of sub-images and the texts contained in them, and storing the mapping relations in the data table.
In the embodiment of the invention, the corresponding texts can be obtained by labeling the display image and the plurality of sub-images, or by recognizing the text in the images with a pre-trained model to obtain the operation texts.
In the embodiment of the present invention, after the mapping relations formed by the display image and the plurality of sub-images with their feature codes are filled into the data table, corresponding list labels may also be generated for the display image and the plurality of sub-images, for their feature codes, and for their texts. For example, the list label of the column holding the display image and the plurality of sub-images is "images", the list label of their feature codes is "feature codes", and the list label of their texts is "texts".
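A minimal sketch of such a data table with its three list labels, using SQLite purely as an illustrative store (the column names and sample row are assumptions):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""CREATE TABLE data_table (
    image        BLOB,   -- list label: images (or a reference to the image)
    feature_code TEXT,   -- list label: feature codes
    text         TEXT    -- list label: texts
)""")
con.execute("INSERT INTO data_table VALUES (?, ?, ?)",
            (b'<png bytes>', '101001...', 'next page'))

# Look up the operation text by feature code.
row = con.execute("SELECT text FROM data_table WHERE feature_code = ?",
                  ('101001...',)).fetchone()
print(row)   # ('next page',)
```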
In the embodiment of the present invention, referring to fig. 2, the determining whether the feature codes corresponding to the display image and the plurality of sub-images exist in the preset data table includes:
S21, extracting the feature code corresponding to the display image and retrieving it in the data table;
if the feature code corresponding to the display image is retrieved, executing S22: judging that the feature code corresponding to the display image exists in the data table;
if the feature code corresponding to the display image cannot be retrieved, executing S23: retrieving the feature codes corresponding to the plurality of sub-images in the data table one by one, and judging whether the feature codes corresponding to all the sub-images can be retrieved;
if the feature codes corresponding to all the sub-images can be retrieved, executing S24: judging that the feature codes corresponding to all the sub-images exist in the data table;
if none of the feature codes corresponding to the sub-images can be retrieved, executing S25: judging that the feature codes corresponding to the display image and the plurality of sub-images do not exist in the data table;
and if only the feature codes corresponding to part of the sub-images can be retrieved, executing S26: judging that the feature codes corresponding to part of the sub-images exist in the data table, as implemented in the sketch below.
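A minimal sketch of the S21-S26 decision flow, assuming the data table is queried as a mapping from feature code to operation text:

```python
def judge_feature_codes(display_code: str, sub_codes: list, table: dict) -> str:
    """Return which branch of the S21-S26 flow applies."""
    if display_code in table:                      # S22: display image hit
        return 'display_exists'
    hits = [code in table for code in sub_codes]   # S23: retrieve one by one
    if all(hits):                                  # S24: every sub-image hit
        return 'all_subimages_exist'
    if not any(hits):                              # S25: no feature code hit
        return 'none_exist'
    return 'part_exist'                            # S26: partial hit
```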
In the embodiment of the present invention, the obtaining, according to the determination result, the operation text included in the display image or the plurality of sub-images includes:
when it is judged that the feature code corresponding to the display image exists in the data table, extracting the text corresponding to that feature code from the data table as the operation text contained in the display image;
when it is judged that the feature codes corresponding to all the sub-images exist in the data table, extracting the texts corresponding to those feature codes from the data table as the operation texts contained in the sub-images;
when it is judged that the feature codes corresponding to part of the sub-images exist in the data table, determining the sub-images whose feature codes do not exist in the data table as target images, and acquiring the operation texts according to the target images;
and when it is judged that the feature codes corresponding to the display image and the plurality of sub-images do not exist in the data table, determining the display image and the plurality of sub-images as target images;
Wherein the obtaining the operation text contained in the display image or the plurality of sub-images according to the target image comprises:
calculating similarity scores between the images stored in the data table and the target image by using a preset image similarity algorithm, and judging whether the similarity scores are greater than a preset threshold;
when a similarity score is greater than the preset threshold, extracting the text corresponding to the matched image from the data table as the operation text contained in the target image;
and when no similarity score exceeds the preset threshold, performing text recognition on the target image to obtain a recognition text, and taking the recognition text as the operation text contained in the target image.
In the embodiment of the invention, the data table contains more texts than the operation text cache library; the texts in the operation text cache library only comprise the texts corresponding to the display image of the interactive page and to the sub-images of that display image.
In the embodiment of the invention, the images in the data table, their feature codes, and their corresponding texts can be stored in association, so that the text corresponding to an image can be extracted according to its feature code; after the extracted texts are stored in the operation text cache library, all the texts corresponding to the images can be obtained from the operation text cache library.
In the embodiment of the invention, if a feature code cannot be retrieved in the data table, image recognition is needed to judge whether a similar image exists in the data table.
In the embodiment of the invention, the image similarity algorithm includes, but is not limited to, the SIFT algorithm, the perceptual hash algorithm and the template matching algorithm. The image similarity algorithm performs similarity calculation, one by one, between the image whose feature code could not be retrieved and the images pre-stored in the data table to obtain similarity values; an image in the data table whose similarity value is greater than the preset threshold is selected as the matched image, and the text corresponding to that image is taken as the operation text.
In the embodiment of the invention, if the feature code corresponding to an image cannot be retrieved in the data table, and no image with a similarity score greater than the preset threshold can be obtained by the image similarity algorithm, the image may not exist in the data table, so text recognition is needed to obtain the text of the corresponding image.
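A minimal sketch of this fallback, assuming the perceptual-hash route (one of the algorithms named above) with the data table queried as a mapping from feature code to text; the 0.9 threshold is an assumed value:

```python
def similarity(code_a: str, code_b: str) -> float:
    """Fraction of matching bits between two equal-length feature codes."""
    return sum(a == b for a, b in zip(code_a, code_b)) / len(code_a)

def acquire_operation_text(target_code: str, table: dict, threshold: float = 0.9):
    """The nearest stored image above the threshold yields its text;
    otherwise return None so the caller falls through to text recognition."""
    best = max(table, key=lambda code: similarity(target_code, code))
    if similarity(target_code, best) > threshold:
        return table[best]
    return None   # similarity <= threshold: perform OCR on the target image
```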
In the embodiment of the present invention, the text recognition of the target image to obtain a recognized text includes:
acquiring all characters in the target image by using a segmentation algorithm;
and carrying out text recognition on all the characters by using a pre-trained text recognition network to obtain a recognition text.
Specifically, the acquiring all characters in the target image by using a segmentation algorithm includes:
performing horizontal projection on the target image, and acquiring the upper limit and the lower limit of each text line after projection;
cutting the image into lines according to the upper and lower limits;
performing vertical projection on each cut line, and acquiring the left limit and the right limit of each character after projection;
and cutting each character according to the left and right limits to obtain a plurality of characters, as in the sketch below.
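A minimal NumPy sketch of this projection-based cutting, assuming a binarized image with text pixels as 1 and background as 0:

```python
import numpy as np

def runs(profile: np.ndarray):
    """Start/end indices of consecutive non-zero stretches in a projection."""
    idx = np.flatnonzero(profile > 0)
    if idx.size == 0:
        return []
    breaks = np.where(np.diff(idx) > 1)[0]
    starts = np.concatenate(([idx[0]], idx[breaks + 1]))
    ends = np.concatenate((idx[breaks] + 1, [idx[-1] + 1]))
    return list(zip(starts, ends))

def split_characters(binary: np.ndarray):
    """Cut the image into lines, then each line into characters."""
    chars = []
    rows = binary.sum(axis=1)                 # horizontal projection
    for top, bottom in runs(rows):            # upper/lower limit of each line
        line = binary[top:bottom]
        cols = line.sum(axis=0)               # vertical projection
        for left, right in runs(cols):        # left/right limit of each char
            chars.append(line[:, left:right])
    return chars
```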
In the embodiment of the invention, the text recognition network can be obtained by training a CNN neural network: text training data are input into the CNN for prediction, a loss value is calculated from the prediction result, and the network is optimized according to the loss value to obtain the text recognition network.
In another alternative embodiment of the present invention, a CTPN deep neural network, the SegLink algorithm, or the like may also be used to perform text detection and character segmentation on the target image.
S3, receiving control voice aiming at the interactive page, and converting the control voice into a control text.
In an embodiment of the present invention, the converting the control speech into the control text includes:
extracting voice features of the control voice to obtain a feature vector;
inputting the feature vector into a preset acoustic model to obtain phoneme information;
obtaining a plurality of phoneme fragments according to the number of phonemes in the phoneme information;
retrieving each phoneme fragment in a preset word stock one by one;
taking the text corresponding to a phoneme fragment retrieved in the word stock as the control text;
and performing approximate sound conversion on the phoneme fragments that are not retrieved in the word stock, and re-matching the converted phoneme fragments against the word stock.
In the embodiment of the invention, the acoustic model includes, but is not limited to, an HMM (hidden Markov model). The phoneme-by-phoneme matching against the word stock according to the preset number of phonemes can be performed in the manner of single, double, triple and quadruple sounds, and the word stock includes popular words, common words, standard words divided according to part of speech (such as adjectives, nouns and adverbs), and the like.
In the embodiment of the invention, the approximate sound conversion includes initial conversion and pronunciation conversion. For example, when the initial is F or H, the initial F or H is replaced with its confusable counterpart and recombined with the un-replaced part; when the initial is L, M, N or R, the initial L, M, N or R is likewise replaced and recombined with the un-replaced part; and when a flat-tongue initial Z, C or S may be confused with a raised-tongue initial ZH, CH or SH, the flat/raised-tongue conversion is performed and the result is recombined with the un-replaced part. A minimal sketch of this conversion is given below.
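This sketch assumes phoneme fragments are pinyin strings; the substitution pairs follow the f/h, l-n-r, and flat-vs-raised-tongue rules just described, but the exact pair set is an assumption:

```python
# Confusable-initial pairs: f/h, l-n-r, and flat vs. raised tongue sounds.
FUZZY_INITIALS = {'f': 'h', 'h': 'f', 'l': 'n', 'n': 'l', 'r': 'l',
                  'zh': 'z', 'z': 'zh', 'ch': 'c', 'c': 'ch', 'sh': 's', 's': 'sh'}

def approximate_sound(fragment: str):
    """Swap the initial for its confusable counterpart and recombine it
    with the un-replaced remainder of the fragment."""
    for initial in sorted(FUZZY_INITIALS, key=len, reverse=True):  # 'zh' before 'z'
        if fragment.startswith(initial):
            return FUZZY_INITIALS[initial] + fragment[len(initial):]
    return None

def match_fragment(fragment: str, word_stock: dict):
    """Retrieve the fragment in the word stock; on a miss, retry its
    approximate sound, as in the re-matching step above."""
    if fragment in word_stock:
        return word_stock[fragment]
    alt = approximate_sound(fragment)
    return word_stock.get(alt) if alt else None

print(match_fragment('hei', {'fei': 'fly'}))   # 'fly', via the f/h conversion
```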
S4, determining an operation text in the operation text cache library whose matching degree with the control text meets the matching condition as the target operation text, and controlling the interactive page to execute the operation corresponding to the target operation text.
In the embodiment of the invention, before the operation corresponding to the target operation text is executed, the operation texts in the operation text cache library can be matched with corresponding operation codes, and the corresponding operation is then executed through the operation code. For example, for the operation texts "movie A" and "next page", the corresponding operation codes may be "jump to detail page" and "jump to next page", respectively, as in the sketch below.
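A minimal sketch of this text-to-operation-code matching, with hypothetical code names standing in for the real page-control calls:

```python
# Hypothetical operation-code table mirroring the "movie A" / "next page" example.
OPERATION_CODES = {
    'movie A':   'jump_to_detail_page',
    'next page': 'jump_to_next_page',
}

def execute_operation(target_text: str) -> None:
    code = OPERATION_CODES.get(target_text)
    if code is not None:
        print(f'interactive page executes: {code}')  # stand-in for the page call
    else:
        print('no operation code matched')

execute_operation('next page')   # interactive page executes: jump_to_next_page
```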
In the embodiment of the present invention, referring to fig. 3, the determining an operation text in the operation text cache library whose matching degree with the control text meets the matching condition as the target operation text includes:
S41, calculating the similarity between each cached operation text in the operation text cache library and the control text one by one, and judging whether the similarity is greater than a first threshold;
if a similarity is greater than the first threshold, executing S42: selecting the cached operation text corresponding to the maximum similarity among the similarities greater than the first threshold as the target operation text;
and if no similarity is greater than the first threshold, executing S43: performing pronunciation unit recognition on the control voice, and determining the target operation text according to the recognition result.
Further, the embodiment of the invention can calculate the similarity between an operation text in the operation text cache library and the control text by the following formula:

cos θ = (a · b_i) / (‖a‖ ‖b_i‖)

wherein cos θ is the similarity score, a is the vector representation of the control text, and b_i is the vector representation of the i-th operation text in the operation text cache library.
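A minimal sketch of this cosine similarity, assuming a simple character-count vectorization of the texts (the disclosure does not fix a vectorization):

```python
import math
from collections import Counter

def cos_similarity(a: str, b: str) -> float:
    """cos θ between character-count vectors of the two texts."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[ch] * vb[ch] for ch in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cos_similarity('next page', 'next page please'))   # close to 1.0
```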
Specifically, the performing pronunciation unit recognition on the control voice and determining the target operation text according to the recognition result includes:
dividing the control voice into a plurality of pronunciation units, and selecting the pronunciation units one by one as the target pronunciation unit;
calculating a minimum distance between the target pronunciation unit and a preset standard pronunciation unit, and judging whether the minimum distance is smaller than a second threshold;
if the minimum distance is smaller than the second threshold, determining the text corresponding to the standard pronunciation unit as the target operation text;
and if the minimum distance is greater than or equal to the second threshold, determining that the target operation text is not found, and outputting that result to the user.
Further, the embodiment of the invention can calculate the minimum distance between the target pronunciation unit and the preset standard pronunciation unit through the following formula:

[formula not reproduced in the available text]

wherein D is the minimum distance, R is the target pronunciation unit, T is the standard pronunciation unit, and θ is a preset coefficient.
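Since the formula itself did not survive in the available text, the sketch below is a hedged stand-in: it measures the gap between the two units with dynamic time warping over their feature sequences and scales it by θ, an assumed form rather than the disclosure's exact formula:

```python
import numpy as np

def min_distance(r: np.ndarray, t: np.ndarray, theta: float = 1.0) -> float:
    """Assumed form: DTW distance between target unit R and standard unit T,
    scaled by the preset coefficient theta."""
    n, m = len(r), len(t)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(r[i - 1] - t[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return theta * float(d[n, m])

print(min_distance(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.5, 3.0])))
```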
According to the embodiments of the invention, after the display image of the interactive page is acquired, the operation texts contained in the display image are rapidly extracted through image segmentation and feature code query and stored in the operation text cache library, so that the operation texts contained in the display image of the interactive page can be rapidly reused; after the control text is obtained from the control voice, the operation text matching the control text is searched directly in the operation text cache library, which improves the efficiency of determining the operation text corresponding to the control voice during voice control and thus the overall voice control efficiency. In addition, the scheme supports rapid and accurate operation for a large number of users across a large number of different pages, improving user experience. Therefore, the voice control method based on an interactive page can solve the problem of low efficiency in the overall voice control process.
Fig. 4 is a functional block diagram of a voice control device based on interactive pages according to an embodiment of the present invention.
The voice control apparatus 100 based on the interactive page according to the present invention may be installed in an electronic device. Depending on the functions implemented, the interactive page-based voice control apparatus 100 may include an image acquisition module 101, an image text acquisition module 102, a control voice acquisition module 103, and a voice control module 104. A module of the invention, which may also be referred to as a unit, is a series of computer program segments stored in the memory of the electronic device that can be executed by the processor of the electronic device and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the image acquisition module 101 is configured to acquire a display image of an interactive page, and perform image segmentation on the display image to obtain a plurality of sub-images;
the image text acquisition module 102 is configured to generate the feature codes respectively corresponding to the display image and the plurality of sub-images, judge whether the feature codes respectively corresponding to the display image and the plurality of sub-images exist in a preset data table, acquire the operation texts contained in the display image or the plurality of sub-images according to the judging result, and cache the acquired operation texts in an operation text cache library;
the control voice acquisition module 103 is configured to receive control voice for the interactive page, and convert the control voice into a control text;
and the voice control module 104 is configured to determine an operation text in the operation text cache library whose matching degree with the control text meets the matching condition as the target operation text, and control the interactive page to execute the operation corresponding to the target operation text.
In detail, each module of the interactive page-based voice control apparatus 100 in the embodiment of the present invention adopts the same technical means as the interactive page-based voice control method described with reference to figs. 1 to 3, and can produce the same technical effects, which are not repeated here.
Fig. 5 is a schematic structural diagram of an electronic device for implementing a voice control method based on an interactive page according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a speech control program, stored in the memory 11 and executable on the processor 10.
The processor 10 may in some embodiments be formed by an integrated circuit, for example a single packaged integrated circuit, or by a plurality of packaged integrated circuits with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit (Control Unit) of the electronic device; it connects the various components of the entire electronic device using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or executing the programs or modules stored in the memory 11 (for example, executing the voice control program) and calling the data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium, including a flash memory, a removable hard disk, a multimedia card, a card-type memory (e.g., an SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device, such as a removable hard disk of the electronic device. The memory 11 may in other embodiments be an external storage device of the electronic device, such as a plug-in removable hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash memory card (Flash Card) provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only for storing application software installed in the electronic device and various types of data, such as the code of the voice control program, but also for temporarily storing data that has been output or is to be output.
The communication bus 12 may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11, the at least one processor 10, and the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., a Wi-Fi interface or a Bluetooth interface), typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a display (Display) or an input unit such as a keyboard (Keyboard); optionally, it may also be a standard wired interface or wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch screen, or the like. The display may also be referred to as a display screen or display unit, for displaying information processed in the electronic device and for displaying a visual user interface.
Fig. 5 shows only an electronic device with certain components; it will be understood by those skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and the device may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device may further include a power source (such as a battery) for supplying power to the respective components. Preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management and power consumption management are implemented through the power management device. The power source may also include one or more of a direct current or alternating current power supply, a recharging device, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like. The electronic device may further include various sensors, a Bluetooth module, a Wi-Fi module, and the like, which are not described herein.
It should be understood that the embodiments described are for illustrative purposes only and do not limit the scope of the patent application to this configuration.
The voice control program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, enable:
acquiring a display image of an interactive page, and performing image segmentation on the display image to obtain a plurality of sub-images;
generating feature codes respectively corresponding to the display image and the plurality of sub-images, judging whether the feature codes respectively corresponding to the display image and the plurality of sub-images exist in a preset data table, acquiring operation texts contained in the display image or the plurality of sub-images according to the judging result, and caching the acquired operation texts in an operation text cache library;
receiving control voice aiming at the interactive page, and converting the control voice into a control text;
and determining an operation text in the operation text cache library whose matching degree with the control text meets a matching condition as a target operation text, and controlling the interactive page to execute the operation corresponding to the target operation text.
Specifically, for the implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiments corresponding to the drawings, which is not repeated here.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
acquiring a display image of an interactive page, and performing image segmentation on the display image to obtain a plurality of sub-images;
generating feature codes respectively corresponding to the display image and the plurality of sub-images, judging whether the feature codes respectively corresponding to the display image and the plurality of sub-images exist in a preset data table, acquiring operation texts contained in the display image or the plurality of sub-images according to the judging result, and caching the acquired operation texts in an operation text cache library;
receiving control voice aiming at the interactive page, and converting the control voice into a control text;
and determining an operation text in the operation text cache library whose matching degree with the control text meets a matching condition as a target operation text, and controlling the interactive page to execute the operation corresponding to the target operation text.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each data block containing information from a batch of network transactions, used to verify the validity (anti-counterfeiting) of its information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique, and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims can also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.