IL280842A - Apparatus system and method of interacting with a web page - Google Patents
Apparatus system and method of interacting with a web page
- Publication number
- IL280842A (application IL28084221A)
- Authority
- IL
- Israel
- Prior art keywords
- interaction
- web page
- code
- commands
- graphic
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/954—Navigation, e.g. using categorised browsing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/027—Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
Description
APPARATUS SYSTEM AND METHOD OF INTERACTING WITH A WEB PAGE

TECHNICAL FIELD
[001] Some embodiments described herein generally relate to a web page and, more specifically, to a markup language for building web pages.
BACKGROUND
[002] The way to interact with a web page is by interacting with visual elements such as, for example, links, buttons, and input fields. The interaction is done by clicking or pressing on the visual elements, e.g., images or text on the web page, via a mouse and/or a keyboard.
[003] Some solutions for accessibility may include adding markup code to the existing HyperText Markup Language (HTML) code of the web page to allow "screen reader" software to read and speak the web page content.
[004] However, with this type of solution, the user cannot know which interactions, such as making a phone call, ordering food, creating a post, and purchasing goods, are supported on the web page.
[005] Voice interaction with the web page requires special and custom development and integration with text-to-speech (TTS) and/or speech-to-text (STT) engines.
[006] The way to interact with the web page is by using visual elements. Thus, people with disabilities, e.g., blind users, cannot interact with the web page.

SUMMARY
[007] Embodiments related to a system, a method, and a product for interacting with a web page are described hereinbelow by way of example only.
[008] One exemplary embodiment may include a product comprising one or more tangible computer-readable non-transitory storage media comprising program instructions for generating one or more commands for interacting with a web page, wherein execution of the program instructions by one or more processors comprises: receiving a web page with a non-graphic interaction functionality, wherein the web page comprises a code created by a first markup language, configured to display the web page content to a user, and a non-graphic interaction code created by a second markup language, configured to provide the non-graphic interaction functionality to the web page; introducing the one or more interaction commands to the user; and performing a non-graphic interaction between the user and the web page by receiving a non-graphic interaction command of the one or more interaction commands and providing a response based on the non-graphic interaction command.
[009] One other exemplary embodiment may include a product comprising one or more tangible computer-readable non-transitory storage media comprising program instructions for generating one or more commands for interacting with a web page, wherein execution of the program instructions by one or more processors comprises: receiving a web page with a non-graphic interaction functionality, wherein the web page comprises a code created by a markup language, configured to display the web page content to a user, and a non-graphic interaction code created by a web command markup language (WCML), configured to provide the non-graphic interaction functionality to the web page; introducing the one or more interaction commands to the user; performing a non-graphic interaction between the user and the web page by receiving a non-graphic interaction command of the one or more interaction commands; and providing a response based on the non-graphic interaction command.
[010] For example, the execution of the program instructions by the one or more processors comprises: with the use of the non-graphic interaction code, parsing the web page code and generating a parsed code; and identifying in the parsed code one or more components configured to provide the graphic interaction functionality to the web page.
[011] For example, the execution of the program instructions by the one or more processors comprises: generating a command console, wherein the command console is configured to run the one or more non-graphic interaction commands.
[012] For example, the second markup language comprises one of an Extensible Markup Language (XML), a JavaScript Object Notation (JSON), and a web command markup language (WCML).
[013] For example, the first markup language and the second markup language comprise the same language.
[014] For example, the first code is written in the body of the web page and the second code is written in the header of the web page.
[015] For example, the second markup language comprises a parser configured to parse the first code and identify one or more interaction components in a parsed code.
[016] For example, the second markup language comprises an interaction engine, and execution of the program instructions of the interaction engine comprises: identifying the one or more interaction commands based on the one or more interaction components.
[017] For example, the interaction engine comprises a text-to-speech engine, and execution of the program instructions of the text-to-speech engine comprises: converting the one or more interaction commands into one or more voice commands to be used by the user and/or by a machine.
[018] For example, the interaction engine comprises a speech-to-text engine, and execution of the program instructions of the speech-to-text engine comprises: converting the one or more voice commands into text code to be used by the web page for interacting with the user by text.
[019] Some other exemplary embodiments disclose an apparatus and/or a program for building a web page comprising a processing circuitry, wherein the processing circuitry is configured to: receive a web page with a non-graphic interaction functionality, wherein the web page comprises a code created by a first markup language, configured to display the web page content to a user, and a non-graphic interaction code created by a second markup language, configured to provide the non-graphic interaction functionality to the web page; introduce the one or more interaction commands to the user; and perform a non-graphic interaction between the user and the web page by receiving a non-graphic interaction command of the one or more interaction commands and providing a response based on the non-graphic interaction command.
[020] For example, the processing circuitry is configured to: with the use of the second code, parse a non-graphic interaction code of the first markup language and generate a parsed code; and identify one or more components in the parsed code configured to provide a visual interaction functionality to interact with the web page.
[021] For example, the processing circuitry is configured to generate a command console, wherein the command console is configured to run the one or more non-graphic interaction commands.
[022] For example, the second markup language comprises one of an Extensible Markup Language (XML), a JavaScript Object Notation (JSON), and a web command markup language (WCML).
[023] For example, the first markup language and the second markup language comprise the same language.
[024] For example, the first code is written in the body of the web page and the second code is written in the header of the web page.
[025] For example, the processing circuitry comprises a parser configured to parse the web page code and identify one or more interaction components in a parsed code.
[026] For example, the processing circuitry comprises an interaction engine configured to identify one or more user interaction commands based on the one or more interaction components.
[027] For example, the interaction engine comprises a text-to-speech engine configured to convert the one or more interaction commands into one or more voice commands for interacting with the user and/or a machine.
[028] For example, the interaction engine comprises a speech-to-text engine configured to convert the one or more voice commands into text to be used for interaction with the user and/or the machine.
[029] It is understood that the present disclosure describes a solution for shortcomings in the field of the art. More specifically, the embodiments described herein enable interaction with a web page by using non-graphic commands.
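As a rough illustration of the parsing described above, the following minimal sketch assumes a hypothetical `<wcml>` header section containing `<command>` elements; the element and attribute names here are invented for illustration and are not taken from the disclosure.

```python
# Minimal sketch (invented markup, not the patent's WCML): locate a
# hypothetical <wcml> section in the page header and list the names of
# the non-graphic commands it declares.
import re

PAGE = """
<html>
  <head>
    <wcml>
      <command name="order-pizza" say="Order a pizza"/>
      <command name="call-store" say="Call the store"/>
    </wcml>
  </head>
  <body><button>Order a pizza</button></body>
</html>
"""

def parse_commands(page_code):
    # Take everything between <head> and </head>, then collect the
    # name attribute of each (hypothetical) <command> element.
    header = re.search(r"<head>(.*?)</head>", page_code, re.S).group(1)
    return re.findall(r'<command name="([^"]+)"', header)

print(parse_commands(PAGE))  # ['order-pizza', 'call-store']
```

The commands extracted this way could then be introduced to the user, e.g., read aloud by a TTS engine.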
BRIEF DESCRIPTION OF THE DRAWINGS
[030] FIG. 1 illustrates a block diagram of an apparatus to interact with a web page, according to some demonstrative embodiments.
[031] FIG. 2 illustrates a flow chart of a method to generate commands for interacting with a web page, according to some demonstrative embodiments.
[032] FIG. 3 illustrates a block diagram of a system to interact with a web page, according to some demonstrative embodiments.
[033] FIG. 4 illustrates a product of manufacture, according to some demonstrative embodiments.

DETAILED DESCRIPTION
[034] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of some embodiments. However, it will be understood by persons of ordinary skill in the art that some embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, units, and/or circuits have not been described in detail so as not to obscure the discussion.
[035] Discussions made herein utilizing terms such as, for example, "processing," "computing," "calculating," "determining," "establishing," "analyzing," "checking," or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing devices, that manipulate and/or transform data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information storage medium that may store instructions to perform operations and/or processes.
[036] The terms "plurality" and "a plurality," as used herein, include, for example, "multiple" or "two or more." For example, "a plurality of items" includes two or more items.
[037] References to "one embodiment," "an embodiment," "demonstrative embodiment," "various embodiments," etc., indicate that the embodiment(s) so described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase "in one embodiment" does not necessarily refer to the same embodiment, although it may.
[038] As used herein, unless otherwise specified, the use of the ordinal adjectives "first," "second," "third," etc., to describe a common object merely indicates that different instances of like objects are being referred to and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
[039] As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an integrated circuit, an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some demonstrative embodiments, the circuitry may be implemented in, or functions associated with the circuitry may be implemented by, one or more software or firmware modules. In some demonstrative embodiments, the circuitry may include logic, at least partially operable in hardware.
[040] The term "logic" may refer, for example, to computing logic embedded in the circuitry of a computing apparatus and/or computing logic stored in a memory of a computing apparatus. For example, the logic may be accessible by a processor of the computing apparatus to execute the computing logic to perform computing functions and/or operations. In one example, logic may be embedded in various types of memory and/or firmware, e.g., silicon blocks of various chips and/or processors. Logic may be included in and/or implemented as part of various circuitry, e.g., radio circuitry, receiver circuitry, control circuitry, transmitter circuitry, transceiver circuitry, processor circuitry, and/or the like. In one example, logic may be embedded in volatile memory and/or nonvolatile memory, including random access memory, read-only memory, programmable memory, magnetic memory, flash memory, persistent memory, and the like. Logic may be executed by one or more processors using memory, e.g., registers, stacks, buffers, and/or the like, coupled to the one or more processors, e.g., as necessary to execute the logic.
[041] The term "markup language," as used hereinbelow, is a system for annotating a document in a way that is syntactically distinguishable from the text. For example, when the document is processed for display, the markup language is not shown and is used to format the text. In embodiments of the disclosure below, the markup language can be used to build a web page.
[042] The term "Extensible Markup Language (XML)," as used hereinbelow, is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable.
[043] The term "JavaScript Object Notation (JSON)," as used hereinbelow, is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute-value pairs and array data types. In some embodiments, JSON may serve as a replacement for XML in AJAX systems.
[044] Furthermore, JSON is a language-independent data format. It was derived from JavaScript, but many modern programming languages include code to generate and parse JSON-format data.
[045] The term "Hypertext Markup Language (HTML) code," as used hereinbelow, is the standard markup language for documents designed to be displayed in a web browser. For example, HTML may be assisted by technologies such as Cascading Style Sheets (CSS) and/or scripting languages such as, for example, JavaScript.
[046] For example, web browsers may receive HTML documents from a web server and/or from local storage. The web browser may render the documents into multimedia web pages.
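Since the second markup language may be JSON, a command set could, for instance, be encoded as a JSON document. The field names below ("commands", "name", "args", "action") are illustrative assumptions, not a schema from the disclosure.

```python
# Hypothetical JSON encoding of a non-graphic command set; the structure
# and field names are invented for illustration.
import json

wcml_json = '''
{
  "commands": [
    {"name": "order", "args": ["item", "size"], "action": "submitOrder"},
    {"name": "call",  "args": ["number"],       "action": "dialNumber"}
  ]
}
'''

commands = json.loads(wcml_json)["commands"]
print([c["name"] for c in commands])  # ['order', 'call']
```

Because JSON is language-independent, such a command set could be parsed by a browser extension, a voice assistant, or server-side middleware alike.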
[047] In some demonstrative embodiments, HTML elements may be the building blocks of HTML pages. With HTML constructs, images and other objects such as, for example, interactive forms may be embedded into the rendered page. For example, HTML may embed programs written in a scripting language such as, for example, JavaScript, which may affect the behavior and content of web pages.
[048] The term "web server," as used hereinbelow, can be implemented in software and/or hardware and/or as a combination of software and hardware. The web server may be dedicated to running software that can satisfy client requests on the World Wide Web. A web server may include one or more websites. A web server may process incoming network requests over HTTP and several other related protocols.
[049] For example, the primary function of a web server is to store, process, and deliver web pages to clients. The communication between client and server takes place using, for example, HTTP. Pages delivered are most frequently HTML documents, which may include images, style sheets, and/or scripts in addition to the text content.
[050] The term "command(s)," as used hereinbelow, is a means that the user and/or a machine may use to interact with the web page. For example, a command is a directive to a computer program to perform a specific task supported by a web page.
[051] The term "visual interaction," as used hereinbelow, relates to interaction with a web page by using a visual element, e.g., a graphic element, on the web page, which the user can click on, press with a keyboard, and/or activate with a voice command that is represented by the visual element.
[052] The term "non-graphic interaction," as used hereinbelow, relates to interaction with a web page using one or more interaction commands that are not represented by a graphic element on the web page and that execute actions supported by the web page.
[053] The term "Web Command Markup Language (WCML)," as used hereinbelow, is a markup language that defines at least a set of non-graphic commands supported by a web page. The markup language may include a flow of commands that depend on each other, a set of arguments accepted by each command, and a reference to operational code to be executed. The WCML may add non-graphic functionality to the web page, such as, for example: enabling full-screen video playing; connecting to and/or enabling registration to the web page through external web pages, such as, for example, Facebook, Google, etc.; generating a communication form that includes the user name and email address, which is saved with the client; interacting with voice assistants; saving the parameters that were used, for the next use, with an indication of where those parameters are saved; interacting with the web page by gesture; and the like.
[054] For example, the WCML may include, but is not limited to, code that enables performing authentication by voice. The middleware may offer to save, once, a username and password pair and/or obtain an access token from a third-party authentication service in order to offer login again with the saved user credentials.
[055] Furthermore, the WCML may include, but is not limited to, code that requires a token to prevent forced commands. For example, to avoid automatic and unauthenticated commands, the middleware may request that a unique token generated for the current session be included in the command arguments. Thus, web bots and/or an unauthenticated user will be forbidden, e.g., preventing a Cross-site request forgery (CSRF) attack.
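The session-token requirement described above can be sketched as follows; this is an assumed design for illustration, not the disclosed implementation.

```python
# Sketch of a per-session command token (assumed design): a command is
# executed only if it carries the token generated for the current
# session; otherwise it is rejected, e.g., a bot or a CSRF attempt.
import secrets

session_token = secrets.token_hex(16)  # issued once per session

def run_command(name, token):
    # Reject any command whose token does not match the session token.
    if token != session_token:
        return "forbidden"
    return f"executed {name}"

assert run_command("order", session_token) == "executed order"
assert run_command("order", "stolen-or-missing") == "forbidden"
```

In practice the token would be bound to the authenticated session on the server side, so a forged request from another origin cannot supply it.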
[056] The WCML may include, but is not limited to, code that enables saving a current order for re-ordering later. Because WCML is a command-oriented language, the final command with the final arguments may be saved for re-execution later. For example, the current order details may be saved for re-ordering later with the same arguments.
[057] The WCML may include, but is not limited to, commands that can be used to automate a process of re-taking screenshots of a web page, even after the content and/or the functionality code of the web page has been changed, while keeping the original meaning of the screenshot.
[058] For example, an updated screenshot of a web application may be re-taken in a specific situation, e.g., after adding a new task with the status "In Progress."
[059] In some demonstrative embodiments, the WCML code may be in the header part of the web page, and the HTML code may be in the body part of the web page.
[060] In some demonstrative embodiments, the WCML code may be at another URL referenced from the first page, e.g., a link to another page and/or file.
[061] Advantageously, in the current state of the art, in order to automate a process that takes a web page screenshot in a specific state (not as loaded by the URL), the HTML or the JavaScript of the web application needs to be the same as at the time the original screenshot was taken. But with the WCML commands method, the automation system can re-run the same commands and re-take a screenshot that presents the same situation, although the HTML and the JavaScript functionality were changed.
[062] It should be understood that WCML is a name and/or a title that is used to describe the web command markup language as it is described above. Other names and/or titles may be used for the web command markup language described above.
[063] Embodiments of the disclosure may include a markup language and a protocol that is designed for non-graphic interaction with a web page.
A method to create web pages that include a markup language to describe supported actions and interactions on a specific web page, a method to extend the browser functionality in order to read these actions and make them accessible in many ways for the end-user, and a browser extension and/or independent software that may read the content of the web page to a user and ask him what to do, without reading the full web page, are described.
[064] For example, a user may run these actions by voice, voice assistants, and/or other bots that may interact with the web page without the need for a software development kit (SDK) embedded in the web page.
[065] In some demonstrative embodiments, the user may run an action by a set of visual gestures, for example, waving the hand in front of the device camera.
[066] For example, the WCML code below may demonstrate a dialog between an end-user, e.g., a visitor, and the web page.
[067] Advantageously, embodiments of the disclosure enable a dialog between the web page and the user, e.g., a visitor, by the use of non-graphic communication and/or commands, such as, for example, voice, text, gestures, etc. Furthermore, the use of non-graphic communication and/or commands may enable men, women, and/or children with disabilities, for example, visually impaired users, to interact with the web page.
[068] For example, the web page may be presented to a visitor in one way and to visually impaired visitors in another way. For example, the web page may be adapted to visually impaired visitors. In addition, embodiments of the disclosure enable a dialog between the web page and the user, e.g., a visitor, without the need to see the web page, e.g., when the communication device is hidden.
[069] Referring first to Figure 1, which is an illustration of a block diagram of an apparatus 100 to interact with a web page 140, according to some demonstrative embodiments.
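The kind of dialog described above can be sketched as a small command console; the command phrases and responses below are invented for illustration and are not the WCML listing referred to in the text.

```python
# Illustrative dialog loop (invented phrases, not the patent's listing):
# the page introduces its supported commands, the visitor answers with
# one of them, and the page responds.
COMMANDS = {
    "order pizza": "What toppings would you like?",
    "call store": "Calling the store now.",
}

def introduce():
    # Announce the supported non-graphic commands, e.g., via TTS.
    return "You can say: " + ", ".join(COMMANDS)

def respond(utterance):
    # Match the visitor's (voice- or text-entered) command and reply.
    return COMMANDS.get(utterance.lower().strip(), "Sorry, I did not get that.")

print(introduce())
print(respond("Order pizza"))  # What toppings would you like?
```

Notably, the visitor never needs to see or click any visual element; the same loop works over voice, text, or gesture input.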
[070] In some demonstrative embodiments, apparatus 100 may be configured to build the web page 140. Apparatus 100 may include processing circuitry 110, an interactor 120 configured to interact with one or more web pages, for example, web page 140, and a web page generator 130 configured to generate the one or more web pages.
[071] In some demonstrative embodiments, web page 140 may include a first web page code 150. For example, the first web page code 150 may be the web page code of web page 140 and may include one or more visual and/or graphic commands that a user and/or a machine may use to interact with the web page by clicking on them with, for example, a keyboard, a mouse, a pen, a finger, or the like. For example, the one or more visual and/or graphic commands may be images, text, logos, video, or the like.
[072] In some demonstrative embodiments, web page 140 may include a second web page code 160, e.g., WCML markup as a second web code 160. For example, the second web page code 160, e.g., additional web code, may include a parser 162, an interaction engine 166, and one or more non-graphic commands 164.
[073] In some demonstrative embodiments, parser 162 may be configured to receive the first web page code 150, e.g., the web page code, and to generate the one or more non-graphic commands 164. For example, the one or more non-graphic commands 164 may include voice commands, text commands, number commands, and the like.
[074] In some demonstrative embodiments, interaction engine 166 may be configured to generate one or more non-graphic commands based on the one or more interaction components.
[075] In some demonstrative embodiments, interaction engine 166 may include a gesture engine 123 operably coupled to a camera 125, e.g., a video camera. For example, gesture engine 123 may detect, in one or more frames received
[076] In some demonstrative embodiments, interaction engine 166 may include a speech-to-text (STT) engine 126 operably coupled to a microphone 124. For example, STT engine 126 may convert a voice command of a user to a text command 122 for web page 140. [077] In some demonstrative embodiments, interaction engine 166 may include a text-to-speech (TTS) engine 128 operably coupled to a speaker 127. For example, TTS engine 128 may convert a text of the web page to a voice signal in order to respond to the user command, if desired. [078] In some demonstrative embodiments, interactor 120 may be used to manage the interaction between the user and/or the machine with web page 140. [079] In some demonstrative embodiments, interactor 120 may be implemented by software and/or by hardware and/or a combination of software and hardware. [080] In some demonstrative embodiments, apparatus 100 may include a memory 170 configured to store software and/or a markup language 175, which may be used by a web page generator to build the web page 140. [081] For example, memory 170 may include any type of memory, such as, for example, RAM, DRAM, ROM, programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), Flash memory, a hard disk drive (HDD), a solid-state disk drive (SDD), fusion drive, a nonvolatile memory, volatile memory and the like. [082] For example, the markup language 175 may include a plurality of markup languages, such as, for example, HTML, XML, JSON, WCML, and the like. [083] In some demonstrative embodiments, web page generator 130 may be implemented by software and/or by hardware and/or a combination of software and hardware. [084] In some demonstrative embodiments, the processing circuitry 110 may be configured to add a non-graphic interaction functionality to the web page 1by adding to the first code 150 created by a first markup language a second code 160 created by a second markup language. 
For example, the first markup language may include HTML, XML, JSON, JavaScript, and the like. The second markup language may include WCML and the like. It should be understood that the second markup language may include similar WCML languages.
[085] In some demonstrative embodiments, with the use of the second code 160, the processing circuitry 110 may be configured to parse a code, e.g., the first code 150, of the first markup language and generate a parsed code.
[086] In some demonstrative embodiments, with the use of the second code 160, the processing circuitry 110 may be configured to identify in the parsed code one or more components, e.g., visual commands 155, configured to provide a visual interaction functionality to interact with the web page 140.
[087] For example, parser 162 may be configured to parse the first code and identify in a parsed code one or more interaction components.
[088] In some demonstrative embodiments, with the use of the second code 160, the processing circuitry 110 may be configured to generate one or more commands, e.g., non-graphic commands 164, configured to provide the non-graphic interaction functionality to web page 140 based on the one or more components.
[089] In some demonstrative embodiments, with the use of the second code 160, the processing circuitry 110 may be configured to initiate an interaction with the web page 140, e.g., order pizza, when receiving a command of the one or more commands, e.g., non-graphic commands 164.
[090] In some demonstrative embodiments, the processing circuitry 110 may be configured to generate, for example, a command console (not shown), wherein the command console may include the one or more commands, e.g., non-graphic commands 164.
[091] In some other demonstrative embodiments, a user may type the one or more commands directly on the web page 140. It should be understood that other non-graphic methods may be used to interact with web page 140.
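Parsing the first code, identifying visual interaction components, and deriving non-graphic commands from them might look like the following minimal sketch, assuming that each button or link label becomes a speakable command; the HTML snippet and the mapping are invented for illustration.

```python
# Sketch (assumed behavior, not the disclosed implementation): find the
# visual interaction components (buttons, links) in a parsed body and
# turn each label into a speakable non-graphic command.
import re

BODY = '<body><button>Order pizza</button><a href="tel:199">Call 199</a></body>'

def visual_components(html):
    # Collect the text labels of button and anchor elements.
    return re.findall(r"<(?:button|a)[^>]*>([^<]+)</(?:button|a)>", html)

def to_voice_commands(components):
    # Normalize each label into a lowercase command phrase.
    return [c.strip().lower() for c in components]

print(to_voice_commands(visual_components(BODY)))  # ['order pizza', 'call 199']
```

A real parser would of course walk the DOM rather than use regular expressions; the point is only the mapping from visual components to non-graphic commands.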
[092] In some demonstrative embodiments, interaction engine 166 may include the TTS engine 128. The TTS engine 128 may be configured to convert the one or more commands into one or more voice commands, e.g., audio signals, to be used by a user and/or a machine, for example, "Pizza," "Olives," "Cola," and the like.

[093] In some demonstrative embodiments, interaction engine 166 may include STT engine 126. The STT engine 126 may be configured to convert the one or more voice commands into one or more text codes to be used by web page 140 to perform the one or more voice commands. For example, the voice command "call 199" may be converted to a text code that causes web page 140 to initiate a phone call to 199.

[094] Reference is now made to FIG. 2, which illustrates a flow chart of a method 200 to generate commands for interacting with a web page, according to some demonstrative embodiments.

[095] In some demonstrative embodiments, method 200 may be executed by a markup language client, for example, by a WCML client. The markup language client may receive a web page code of a selected web page (text box 210).

[096] In some demonstrative embodiments, the markup language client may include a parser, e.g., parser 162 (FIG. 1). The parser may parse the web page code and may provide a parsed code (text box 220). The parser may identify one or more command components, e.g., graphic (visual) and non-graphic command components, in the parsed code (text box 230).

[097] In some demonstrative embodiments, the markup language client may include a TTS engine, e.g., TTS engine 128 (FIG. 1), which may be configured to identify one or more non-graphic commands (text box 240). The one or more non-graphic commands may be used to interact with the web page, e.g., web page 140 (FIG. 1) (text box 250).
[098] In some demonstrative embodiments, the non-graphic interaction code may be located on the header of the web page and the HTML code may be on the body of the web page, e.g., web page 140 (FIG. 1).

[099] For example, the below code may be presented on the header of the web page: (the code listing is not reproduced in this extract).
[0100] With the above example of HTML code, the visitor of the web page may not see the effect of this code on the web page, but may hear the web page say: "Welcome to the website of the Israel Patent Office, and we will immediately present you with the options for action on the website. What is your selection?" "Call us" "We can call you" "What is your name?" "To which phone number do you want us to call you?"

[0101] Another example of WCML code: (the code listing is not reproduced in this extract).
[0102] The above exemplary code on the header of the web page, e.g., web page 140, may enable one user to interact with the page by using, for example, a mouse, while another user may interact with the web page by using, for example, voice, text, gesture, and the like.

[0103] Reference is now made to FIG. 3, which illustrates a block diagram of a system 300 to interact with a web page, according to some demonstrative embodiments.

[0104] In some demonstrative embodiments, system 300 may include a website HTML 301. The website HTML 301 may include a WCML markup module 302. The WCML markup module 302 may include, for example, Javascript operational code 308, which may be configured to operate code on the webpage 301.

[0105] In some demonstrative embodiments, system 300 may further include a markup language client 320, for example, a WCML client. For example, the markup language client 320 may be a browser, a voice assistant and/or a test runner, and the like. It should be understood that the markup language client 320 is not limited to the above examples, and other use cases may be implemented by the markup language client 320.

[0106] In some demonstrative embodiments, the markup language client 320 may include a parser 322, e.g., a WCML parser, an output module 324, e.g., a WCML output module, and an input/runner module 326, e.g., a WCML input/runner module. The output module 324 may include a TTS engine 325, and the input/runner module may include an STT engine 328. The output module 324 may output a visual instruction and/or a voice instruction to a user.
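The client structure of FIG. 3, in which a parser's output feeds an output module (backed by a TTS engine) and an input/runner module (backed by an STT engine), can be sketched as follows. The class and method names are illustrative assumptions, and the TTS/STT engines are stubbed out as plain text:

```python
class MarkupLanguageClient:
    """Sketch of the markup language client 320 of FIG. 3.

    A real client would hand announce() output to a TTS engine and
    feed STT transcripts into run(); here both ends are plain strings.
    """

    def __init__(self, commands):
        # Commands as produced by the parser (e.g., parser 322).
        self.commands = {c.lower(): c for c in commands}

    def announce(self) -> str:
        # Output module 324: text that a TTS engine would speak.
        return "Available commands: " + ", ".join(self.commands.values())

    def run(self, spoken_text: str):
        # Input/runner module 326: match an STT transcript against
        # the known commands; None means "not recognized".
        return self.commands.get(spoken_text.strip().lower())


client = MarkupLanguageClient(["Order pizza", "Call 199"])
print(client.announce())
print(client.run("call 199"))  # Call 199
```

Keeping the parser, output, and input/runner roles separate mirrors the module split in FIG. 3 and lets the same command set drive voice, text, or test-runner front ends.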
Claims (19)
1. A product comprising one or more tangible computer-readable non-transitory storage media comprising program instructions for generating one or more commands for interacting with a web page, wherein execution of the program instructions by one or more processors comprises: receiving a web page with a non-graphic interaction functionality, wherein the web page comprises a code created by a first markup language configured to display the web page content to a user, and a non-graphic interaction code created by a second markup language, configured to provide the non-graphic interaction functionality to the web page; introducing the one or more interaction commands to the user; performing a non-graphic interaction between the user and the web page by receiving a non-graphic interaction command of the one or more interaction commands; and providing a response based on the non-graphic interaction command.
2. A product comprising one or more tangible computer-readable non-transitory storage media comprising program instructions for generating one or more commands for interacting with a web page, wherein execution of the program instructions by one or more processors comprises: receiving a web page with a non-graphic interaction functionality, wherein the web page comprises a code created by a markup language, configured to display the web page content to a user, and a non-graphic interaction code created by a web command markup language (WCML), configured to provide the non-graphic interaction functionality to the web page; introducing the one or more interaction commands to the user; performing a non-graphic interaction between the user and the web page by receiving a non-graphic interaction command of the one or more interaction commands; and providing a response based on the non-graphic interaction command.
3. The product of claim 1 or 2, wherein execution of the program instructions by one or more processors comprises: with the use of the non-graphic interaction code, parsing the web page code and generating a parsed code; and identifying in the parsed code one or more components configured to provide the graphic interaction functionality to the web page.
4. The product of claim 1 or 2, wherein execution of the program instructions by one or more processors comprises: generating a command console, wherein the command console is configured to run the one or more non-graphic interaction commands.
5. The product of claim 1, wherein the second markup language comprises one of an Extensible Markup Language (XML), a JavaScript Object Notation (JSON), and a web command markup language (WCML).
6. The product of claim 2, wherein the markup language comprises WCML.
7. The product of claim 1, wherein the processing circuitry is configured to control with the second markup language a parser configured to parse the code and identify in a parsed code one or more interaction components.
8. The product of claim 1, wherein the processing circuitry configured to control with the second markup language comprises an interaction engine, and execution of the program instructions of the interaction engine comprises: identifying the one or more interaction commands based on the one or more interaction components.
9. The product of claim 8, wherein the interaction engine comprises a text to speech engine and execution of the program instructions of the text to speech engine comprises: converting the one or more interaction commands into one or more voice commands to be used by the user.
10. The product of claim 9, wherein the interaction engine comprises a speech to text engine and execution of the program instructions of the speech to text engine comprises: converting the one or more voice commands into text commands to be used by the web page for interaction with the user by texting.
11. An apparatus for interacting with a web page comprising a processing circuitry, wherein the processing circuitry is configured to: receive a web page with a non-graphic interaction functionality, wherein the web page comprises a code created by a first markup language configured to display the web page content to a user, and a non-graphic interaction code created by a second markup language, configured to provide the non-graphic interaction functionality to the web page; introduce the one or more interaction commands to the user; perform a non-graphic interaction between the user and the web page by receiving a non-graphic interactive command of the one or more interactive commands; and provide a response based on the non-graphic interaction command.
12. The apparatus of claim 11, wherein the processing circuitry is configured to: with the use of the second code, parse the non-graphic interaction code and generate a parsed code; and identify one or more components in the parsed code which are configured to provide the graphic interaction functionality to the web page.
13. The apparatus of claim 11, wherein the processing circuitry is configured to: generate a command console wherein the command console is configured to run the one or more non-graphic interaction commands.
14. The apparatus of claim 11, wherein the second markup language comprises one of an Extensible Markup Language (XML), a JavaScript Object Notation (JSON), and a web command markup language (WCML).
15. The apparatus of claim 11, wherein the first code is written on the body of the web page and the second code is written on the header of the web page.
16. The apparatus of claim 11, wherein the processing circuitry comprises a parser configured to parse the web page code and identify in a parsed code one or more interaction components.
17. The apparatus of claim 11, wherein the processing circuitry comprises an interaction engine which is configured to identify one or more interaction commands based on the one or more interaction components.
18. The apparatus of claim 17, wherein the interaction engine comprises a text to speech engine configured to convert the one or more interaction commands into one or more voice commands to be used by a user.
19. The apparatus of claim 17, wherein the interaction engine comprises a speech to text engine configured to convert the one or more voice commands into text to be used by the web page for interaction with the user.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL280842A IL280842A (en) | 2021-02-14 | 2021-02-14 | Apparatus system and method of interacting with a web page |
PCT/IL2022/050170 WO2022172272A1 (en) | 2021-02-14 | 2022-02-13 | Apparatus system and method of interacting with a web page |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL280842A IL280842A (en) | 2021-02-14 | 2021-02-14 | Apparatus system and method of interacting with a web page |
Publications (1)
Publication Number | Publication Date |
---|---|
IL280842A true IL280842A (en) | 2022-07-01 |
Family
ID=82611050
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
IL280842A IL280842A (en) | 2021-02-14 | 2021-02-14 | Apparatus system and method of interacting with a web page |
Country Status (2)
Country | Link |
---|---|
IL (1) | IL280842A (en) |
WO (1) | WO2022172272A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0992980A2 (en) * | 1998-10-06 | 2000-04-12 | Lucent Technologies Inc. | Web-based platform for interactive voice response (IVR) |
US20050229048A1 (en) * | 2004-03-30 | 2005-10-13 | International Business Machines Corporation | Caching operational code in a voice markup interpreter |
US20120317475A1 (en) * | 2011-05-12 | 2012-12-13 | Qualcomm Incorporated | Concurrent Parsing and Processing of Serial Languages |
US20140040722A1 (en) * | 2012-08-02 | 2014-02-06 | Nuance Communications, Inc. | Methods and apparatus for voiced-enabling a web application |
WO2014189987A1 (en) * | 2013-05-21 | 2014-11-27 | Microsoft Corporation | Method for finding elements in a webpage suitable for use in a voice user interface (disambiguation) |
US20160196111A1 (en) * | 2014-03-07 | 2016-07-07 | Paypal, Inc. | Interactive voice response interface for webpage navigation |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10444934B2 (en) * | 2016-03-18 | 2019-10-15 | Audioeye, Inc. | Modular systems and methods for selectively enabling cloud-based assistive technologies |
- 2021
- 2021-02-14 IL IL280842A patent/IL280842A/en unknown
- 2022
- 2022-02-13 WO PCT/IL2022/050170 patent/WO2022172272A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2022172272A1 (en) | 2022-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9021367B2 (en) | Metadata capture for screen sharing | |
US9690854B2 (en) | Voice-enabled dialog interaction with web pages | |
US9584504B2 (en) | Auto login method and device | |
US9177551B2 (en) | System and method of providing speech processing in user interface | |
CN103152614B (en) | Second display is used to carry out the system and method across service search of voice driven | |
US20140122618A1 (en) | User-aided learning chatbot system and method | |
US9122848B2 (en) | Authentication of user interface elements in a web 2.0 environment | |
US9807224B2 (en) | Method and apparatus for accessing services of a device | |
US9916128B2 (en) | Visual and voice co-browsing framework | |
US20170277703A1 (en) | Method for Displaying Webpage and Server | |
WO2015043442A1 (en) | Method, device and mobile terminal for text-to-speech processing | |
CN105765487A (en) | Execution and display of events in web browsers | |
CN101707627A (en) | Method and device for presenting page information | |
US9342386B1 (en) | Messaging channel for web pages, extensions, and applications to communicate | |
Rajput et al. | The Way to Make Blind People Use the Email System: Voice Based Email Generating System Using Artificial Intelligence | |
US10142446B2 (en) | Dialog server | |
US11238754B2 (en) | Editing tool for math equations | |
US8041839B2 (en) | Method and system of providing active web user interface | |
IL280842A (en) | Apparatus system and method of interacting with a web page | |
CN110088750B (en) | Method and system for providing context function in static webpage | |
GB2478767A (en) | Accessing services of a device | |
JP6664536B1 (en) | Web form input support system | |
CN111968630B (en) | Information processing method and device and electronic equipment | |
CN114629955A (en) | Identity authentication method, identity authentication equipment and computer readable storage medium | |
US11770437B1 (en) | Techniques for integrating server-side and client-side rendered content |