CN112308075B - Electronic device, method, apparatus, and medium for recognizing text

Info

Publication number
CN112308075B
CN112308075B (Application No. CN202010118376.8A)
Authority
CN
China
Prior art keywords
electronic device
camera
text
mode
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010118376.8A
Other languages
Chinese (zh)
Other versions
CN112308075A (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010118376.8A priority Critical patent/CN112308075B/en
Publication of CN112308075A publication Critical patent/CN112308075A/en
Application granted granted Critical
Publication of CN112308075B publication Critical patent/CN112308075B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose an electronic device, a method, and an apparatus for recognizing text. One embodiment of the electronic device includes: a built-in sensor configured to generate installation state information, where the installation state information indicates whether the electronic device is in a fixed mode or a wearable mode; a camera configured to acquire a text image to be recognized; a processor configured to wake up the camera in a manner matching the installation state information; and a communication module configured to send the text image to be recognized to a target device so that the target device generates a text recognition result. This embodiment allows the text recognition device, built as an accessory, to switch flexibly between the fixed mode and the wearable mode, and balances energy consumption and usability requirements through the different wake-up manners.

Description

Electronic device, method, apparatus, and medium for recognizing text
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to an electronic device, a method, and an apparatus for recognizing text.
Background
With the rapid development of computer technology, image processing technology has come into wide use.
In the field of computer-aided teaching, devices such as point-and-read pens and scanning pens are commonly used to read out the contents of a book by means of image processing technology and speech recognition technology.
Disclosure of Invention
Embodiments of the present application provide an electronic device, a method, an apparatus, and a medium for recognizing text.
In a first aspect, an embodiment of the present application provides an electronic device for recognizing text, the electronic device including: a built-in sensor configured to generate installation state information, where the installation state information indicates whether the electronic device is in a fixed mode or a wearable mode; a camera configured to acquire a text image to be recognized; a processor configured to wake up the camera in a manner matching the installation state information; and a communication module configured to send the text image to be recognized to a target device so that the target device generates a text recognition result.
In some embodiments, the processor is further configured to: determine whether the camera is in the fixed mode or the wearable mode based on the installation state information; detect a wake-up instruction in response to determining that the camera is in the fixed mode; and wake up the camera in response to detecting the wake-up instruction.
In some embodiments, the processor is further configured to wake up the camera in response to determining that the camera is in the wearable mode.
In some embodiments, the electronic device further comprises a built-in battery.
In some embodiments, the installation state information further indicates whether the electronic device is electrically connected to a rechargeable base; and the processor is further configured to: send information indicating that charging of the built-in battery is permitted, in response to determining that the installation state information indicates that the electronic device is electrically connected to the rechargeable base.
In some embodiments, the processor is further configured to: send information indicating that power supply from the built-in battery is enabled, in response to determining that the camera is in the wearable mode.
In some embodiments, the built-in sensor comprises an infrared sensor; and the built-in sensor is further configured to: generate installation state information indicating that the camera is in the wearable mode, in response to detecting a human body part within a preset sensing range.
In a second aspect, an embodiment of the present application provides a method for recognizing text, applied to the electronic device described in any implementation of the first aspect, the method including: acquiring installation state information of the electronic device, where the installation state information indicates whether the electronic device is in a fixed mode or a wearable mode; waking up a camera of the electronic device in a manner matching the installation state information; acquiring a text image to be recognized; and sending the text image to be recognized to a target device so that the target device generates a text recognition result.
In some embodiments, waking up the camera of the electronic device in a manner matching the installation state information includes: determining whether the camera is in the fixed mode or the wearable mode based on the installation state information; detecting a wake-up instruction in response to determining that the camera is in the fixed mode; and waking up the camera in response to detecting the wake-up instruction.
In some embodiments, waking up the camera of the electronic device in a manner matching the installation state information includes: waking up the camera in response to determining that the camera is in the wearable mode.
In some embodiments, the installation state information further indicates whether the electronic device is electrically connected to a rechargeable base; and the method further includes: sending information indicating that charging of a built-in battery of the electronic device is permitted, in response to determining that the installation state information indicates that the electronic device is electrically connected to the rechargeable base.
In some embodiments, the method further includes: sending information indicating that power supply from a built-in battery of the electronic device is enabled, in response to determining that the camera is in the wearable mode.
In a third aspect, an embodiment of the present application provides an apparatus for recognizing text, applied to the electronic device described in any implementation of the first aspect, the apparatus including: a state acquisition unit configured to acquire installation state information of the electronic device, where the installation state information indicates whether the electronic device is in a fixed mode or a wearable mode; a wake-up unit configured to wake up a camera of the electronic device in a manner matching the installation state information; an image acquisition unit configured to acquire a text image to be recognized; and a sending unit configured to send the text image to be recognized to a target device so that the target device generates a text recognition result.
In some embodiments, the wake-up unit comprises: a detection module configured to determine whether the camera is in the fixed mode or the wearable mode based on the installation state information, and to detect a wake-up instruction in response to determining that the camera is in the fixed mode; and a wake-up module configured to wake up the camera in response to detecting the wake-up instruction.
In some embodiments, the wake-up unit is further configured to wake up the camera in response to determining that the camera is in a wearable mode.
In some embodiments, the installation state information further indicates whether the electronic device is electrically connected to a rechargeable base; and the apparatus further comprises: a charging indication unit configured to send information indicating that charging of a built-in battery of the electronic device is permitted, in response to determining that the installation state information indicates that the electronic device is electrically connected to the rechargeable base.
In some embodiments, the apparatus further comprises: a power supply indication unit configured to send information indicating that power supply from the built-in battery of the electronic device is enabled, in response to determining that the camera is in the wearable mode.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method described in any implementation manner of the second aspect.
In the electronic device, method, apparatus, and medium for recognizing text provided by the embodiments of the present application, the built-in sensor first generates installation state information indicating whether the electronic device is in the fixed mode or the wearable mode; the processor then wakes up the camera in a manner matching the installation state information; next, the camera acquires a text image to be recognized; finally, the communication module sends the text image to be recognized to the target device so that the target device generates a text recognition result. In this way, the text recognition device, built as an accessory, can switch flexibly between the fixed mode and the wearable mode, and the different wake-up manners balance energy consumption and usability requirements.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a timing diagram of interactions between various devices in an embodiment of an electronic apparatus for recognizing text according to the present application;
FIG. 3 is a schematic diagram of one application scenario of an electronic device for recognizing text in accordance with an embodiment of the present application;
FIG. 4 is a flow diagram of one embodiment of a method for recognizing text as applied to an electronic device of the present application;
FIG. 5 is a schematic diagram illustrating the structure of one embodiment of an apparatus for recognizing text according to the present application;
FIG. 6 is a schematic block diagram of an electronic device suitable for use in implementing embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which the electronic device for recognizing text of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, a network 103, and a server 104. The network 103 serves as a medium for providing communication links between the terminal devices 101, 102 and the server 104. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102 may interact with the server 104 via the network 103 to receive or send messages or the like. The terminal apparatuses 101 and 102 may be hardware or software. When the terminal devices 101 and 102 are hardware, the terminal device 101 may be a portable camera, and the terminal device 102 may be various electronic devices for assisting in character recognition, such as an intelligent desk lamp used in cooperation with the terminal device 101. The terminal device 101 may include a built-in sensor, a camera, and a processor. When the terminal apparatuses 101 and 102 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 104 may be a server that provides various services, such as a background server that provides support for the terminal device 101 to generate text recognition results. The background server may analyze and process the text image to be recognized acquired by the terminal device 101, generate a processing result, and feed back the processing result (such as a text recognition result) to the terminal device 101.
It should be noted that, optionally, the terminal device 102 may also directly perform analysis processing on the text image to be recognized acquired by the terminal device 101 to generate a text recognition result. At this time, the network 103 and the server 104 may not exist.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the electronic device for recognizing text provided in the embodiments of the present application is generally the terminal device 101. The method for recognizing text provided in the embodiments of the present application is generally executed by the terminal device 101, and accordingly, the apparatus for recognizing text is generally arranged in the terminal device 101.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a timing diagram 200 of interactions between various devices in one embodiment of an electronic device for recognizing text according to the present application is shown. The electronic device for recognizing text may include: a built-in sensor, a camera, a processor, and a communication module. The built-in sensor may be configured to generate installation state information. The camera may be configured to acquire a text image to be recognized. The processor may be configured to wake up the camera in a manner matching the installation state information. The communication module may be configured to send the text image to be recognized to the target device, so that the target device generates a text recognition result.
As shown in fig. 2, in step 201, the built-in sensor generates mounting state information.
In the present embodiment, the built-in sensor in the electronic device for recognizing text (such as the terminal device 101 shown in Fig. 1) can generate the installation state information in various ways. The installation state information may be used to indicate that the electronic device is in the fixed mode or the wearable mode. The fixed mode characterizes that the electronic device is fixed at a certain position; for example, the electronic device may be fastened to a mating device (e.g., a stand or a desk lamp) by a snap or screw connection. The wearable mode characterizes that the electronic device is used as a wearable device, either hand-held or worn.
In some optional implementations of the embodiment, the electronic device for recognizing text may further include a connection component. The connecting member may be used for fixing (e.g., snap, screw thread) or wearing (e.g., watch band).
In this embodiment, the built-in sensor may include, but is not limited to, at least one of: an acceleration sensor, a magnetic sensor, an orientation sensor, a gyroscope sensor, and a gravity sensor. The built-in sensor can be used to acquire acceleration values on the three axes x, y, and z, ambient magnetic field data on the three axes, angle data on the three axes (such as pitch, yaw, and roll angles), angular acceleration on the three axes, and gravity data. As an example, the built-in sensor in the electronic device may include a gravity sensor, and may generate the installation state information according to the frequency and amplitude of acceleration changes. Specifically, in response to determining that the frequency and amplitude of acceleration changes within a preset time period exceed preset thresholds, the built-in sensor may generate installation state information indicating that the electronic device is in the wearable mode. As still another example, the built-in sensor in the electronic device may further include a voltage sensor for detecting the voltage at the charging port of the electronic device. Specifically, in response to determining that the voltage detected by the voltage sensor indicates that the charging port is connected to a charger, the built-in sensor may generate installation state information indicating that the electronic device is in the fixed mode.
In some optional implementations of this embodiment, the built-in sensor may include an infrared sensor. In response to determining that the human body part is detected within the preset sensing range, the infrared sensor may generate installation state information indicating that the electronic device is in a wearable mode. The preset sensing range is generally a working range of the infrared sensor. The built-in sensor is usually provided at a position where the built-in sensor contacts the skin when the user holds or wears the electronic device.
In some optional implementations of this embodiment, the installation state information may also be used to indicate whether the electronic device is electrically connected to a rechargeable base. For example, a voltage sensor built into the electronic device may detect the voltage at the charging port of the electronic device and generate the installation state information according to whether that voltage reaches the rated value.
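For illustration only, the following Python sketch shows one way the installation state information described above could be derived from built-in sensor readings. The thresholds, the rated-voltage default, and helper names such as AccelSample are assumptions for the sketch, not part of the patented implementation.

    # Illustrative sketch only: the sensor-fusion rules below (window length,
    # thresholds, helper names such as AccelSample) are assumptions, not the
    # patented implementation.
    from dataclasses import dataclass
    from statistics import pstdev
    from typing import Dict, List

    FIXED_MODE = "fixed"
    WEARABLE_MODE = "wearable"

    @dataclass
    class AccelSample:
        x: float
        y: float
        z: float

    def detect_installation_state(accel_window: List[AccelSample],
                                  infrared_body_detected: bool,
                                  charging_port_voltage: float,
                                  rated_voltage: float = 5.0) -> Dict[str, object]:
        """Derive installation state information from built-in sensor readings."""
        # Infrared sensor: skin contact within the sensing range implies the
        # device is being held or worn.
        if infrared_body_detected:
            return {"mode": WEARABLE_MODE, "docked": False}

        # Gravity/acceleration sensor: large, frequent changes over the sampling
        # window suggest hand-held or worn use (the 0.8 threshold is a placeholder).
        magnitudes = [(s.x ** 2 + s.y ** 2 + s.z ** 2) ** 0.5 for s in accel_window]
        if magnitudes and pstdev(magnitudes) > 0.8:
            return {"mode": WEARABLE_MODE, "docked": False}

        # Voltage sensor: a charging-port voltage near the rated value indicates
        # the device is seated on the rechargeable base, i.e. the fixed mode.
        docked = abs(charging_port_voltage - rated_voltage) < 0.25
        return {"mode": FIXED_MODE, "docked": docked}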
In step 202, the processor wakes up the camera in a manner matching the installation state information.
In this embodiment, the processor in the electronic device may wake up the camera in the electronic device in a manner matching the installation state information generated in step 201 according to a preset correspondence table between the installation state information and the wake-up manner.
In some optional implementations of this embodiment, the step 202 may further include:
At step 2021, it is determined whether the camera is in the fixed mode or the wearable mode based on the installation state information.
In these implementations, since the installation state information may indicate that the electronic device is in the fixed mode or the wearable mode, the processor may determine whether the camera is in the fixed mode or the wearable mode according to the installation state information.
Step 2022, in response to determining that the camera is in the fixed mode, detecting a wake-up instruction.
In these implementations, the processor of the electronic device may detect a wake-up instruction in response to determining that the camera of the electronic device is in the fixed mode. The wake-up instruction may take various forms, including but not limited to at least one of: a wake-up word, a touch operation, or the direction and distance of a sliding operation.
Step 2023, in response to detecting the wake-up instruction, waking up the camera.
In these implementations, in response to detecting the wake-up instruction, the processor of the electronic device may wake up the camera in the electronic device in various ways to make the camera in an operating state.
Based on the optional implementation manner, optionally, the electronic device may further include a built-in battery.
Based on the above optional implementation, optionally, in response to determining that the installation state information generated in step 201 indicates that the electronic device is electrically connected to the rechargeable base, the processor of the electronic device may further send information indicating that charging of the built-in battery is permitted. The rechargeable base may include a power supply device mated with the electronic device, and may include various devices that provide a power interface, such as the aforementioned mating desk lamp.
Based on this optional implementation, the electronic device can be fixed on a mating device (such as a desk lamp) and perform text recognition while being charged.
In some optional implementations of this embodiment, the processor may wake up the camera in response to determining that the camera is in the wearable mode. The wearable mode characterizes that the electronic device is used as a wearable device, hand-held or worn, so in this mode the camera can be woken up without a wake-up instruction. Since text recognition is usually performed continuously in the wearable mode, dispensing with the wake-up instruction improves the response speed of text recognition.
Based on the above optional implementation, optionally, the electronic device may further include a built-in battery.
Based on the above optional implementation, optionally, in response to determining that the camera is in the wearable mode, the processor may further send information indicating that power supply from the built-in battery is enabled. Optionally, in response to determining that the camera is in the wearable mode, the processor may further establish a communication connection between the electronic device and a target electronic device. The target electronic device may be an electronic device mated with the electronic device (e.g., a host device with an electronic screen, or a speaker device).
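For illustration only, the following Python sketch summarizes the wake-up policy described in steps 2021-2023 and in the wearable-mode implementation above. The Camera class, the wake_word_heard flag, and the returned control messages are placeholders assumed for the sketch; only the branching on the installation state follows the description.

    # Minimal sketch of the wake-up policy: Camera, wake_word_heard, and the
    # returned control messages are placeholders; only the branching on the
    # installation state follows the description above.
    from typing import Dict, List

    class Camera:
        def __init__(self) -> None:
            self.awake = False

        def wake(self) -> None:
            self.awake = True

    def wake_camera(installation_state: Dict[str, object],
                    camera: Camera,
                    wake_word_heard: bool) -> List[str]:
        """Return the control messages the processor would emit for this state."""
        messages: List[str] = []
        if installation_state["mode"] == "wearable":
            # Wearable mode: wake immediately (no wake-up instruction needed)
            # and switch to the built-in battery.
            camera.wake()
            messages.append("enable built-in battery power supply")
        else:
            # Fixed mode: stay asleep until a wake-up instruction is detected,
            # which keeps standby energy consumption low.
            if wake_word_heard:
                camera.wake()
            if installation_state.get("docked"):
                messages.append("permit charging of built-in battery")
        return messages

    # Example: a docked device in fixed mode that has just heard the wake word.
    cam = Camera()
    print(wake_camera({"mode": "fixed", "docked": True}, cam, wake_word_heard=True))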
In step 203, the camera acquires a text image to be recognized.
In this embodiment, the camera in the electronic device may acquire the text image to be recognized in various manners. As an example, the camera in the electronic device may directly capture an image containing a text as a text image to be recognized.
In step 204, the communication module sends the text image to be recognized to the target device, so that the target device generates a text recognition result.
In this embodiment, the communication module in the electronic device may send the text image to be recognized, acquired in step 203, to a target device. The target device may generate the text recognition result in various ways. The target device may be any of various devices with a text recognition function, such as a communicatively connected background server. As an example, the target device may employ various OCR (Optical Character Recognition) techniques to generate the text recognition result. The OCR techniques may, for example, employ a pre-trained artificial neural network for recognition. The artificial neural network may include, for example, various convolutional neural networks (CNNs).
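For illustration only, the following Python sketch shows what the target-device side of step 204 might look like: it decodes the received image bytes and runs an off-the-shelf OCR engine. The patent does not prescribe a specific engine; pytesseract is used here purely as an illustrative stand-in for the "various OCR techniques" mentioned above, and the function name is an assumption for the sketch.

    # The patent leaves the OCR engine open ("various OCR techniques");
    # pytesseract is used below purely as an illustrative stand-in.
    import io

    import pytesseract           # pip install pytesseract (requires the tesseract-ocr binary)
    from PIL import Image        # pip install pillow

    def generate_text_recognition_result(image_bytes: bytes, lang: str = "eng") -> str:
        """Decode the received text image and return the recognized text."""
        image = Image.open(io.BytesIO(image_bytes))
        return pytesseract.image_to_string(image, lang=lang)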
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of an electronic device for recognizing text according to an embodiment of the present application. In the application scenario of Fig. 3, an electronic device (e.g., an OCR camera) 301 for recognizing text is installed on an intelligent desk lamp 302 used in cooperation with it. The built-in sensor in the electronic device 301 may generate installation state information indicating that the electronic device 301 is in the fixed mode. When using the device, the user 303 first speaks "ring bell" to wake it up. The processor in the electronic device 301 detects "ring bell" as a wake-up instruction and wakes up the camera in the electronic device 301. Then, the user 303 opens the book 304 and places it at a suitable distance from the camera of the electronic device 301. Thereafter, the camera captures an image including the text on the book 304 as the text image to be recognized. Alternatively, the user 303 may wear the electronic device 301 on a hand and adjust it by hand so that the camera captures an image of the text on the book 304 as the text image to be recognized. In this state, the user 303 need not wake up the electronic device 301 by speaking a wake-up word. Then, the electronic device 301 sends the text image to be recognized to a background server for recognition, so as to generate a text recognition result. Optionally, after obtaining the text recognition result from the background server, the electronic device 301 may further read the result aloud using speech synthesis technology.
In the prior art, an OCR recognition device is typically integrated into a point-and-read pen or scanning pen, so the device can only be used hand-held and cannot be applied flexibly to different scenarios. In the electronic device provided by the embodiments of the present application, the built-in sensor generates the installation state information and the processor provides different wake-up schemes according to that information, which reduces energy consumption in the fixed mode and responds promptly to text recognition in the wearable mode, thereby balancing energy consumption and usability requirements.
With further reference to FIG. 4, a flow 400 of one embodiment of a method for recognizing text is shown. The flow 400 of the method for recognizing text comprises the following steps:
step 401, obtaining installation state information of the electronic device.
In the present embodiment, the executing body of the method for recognizing text (e.g., the terminal device 101 shown in Fig. 1) can acquire the installation state information of the electronic device in various ways. As an example, the executing body may acquire the installation state information generated by the built-in sensor through a bus in the electronic device.
It should be noted that the step of generating the installation state information by the built-in sensor is consistent with the description of step 201 in the foregoing embodiment, and is not described herein again.
Step 402, waking up the camera of the electronic device in a manner matching the installation state information.
In some optional implementations of this embodiment, the executing body may wake up the camera in a manner matching the installation state information through the following steps:
First, determine whether the camera is in the fixed mode or the wearable mode based on the installation state information.
Second, in response to determining that the camera is in the fixed mode, detect a wake-up instruction.
Third, in response to detecting the wake-up instruction, wake up the camera.
Based on the above optional implementation, in response to determining that the installation state information indicates that the electronic device is electrically connected to a rechargeable base, the executing body may further send information indicating that charging of a built-in battery of the electronic device is permitted.
In some optional implementations of this embodiment, the executing body may wake up the camera in response to determining that the camera is in the wearable mode.
Based on this optional implementation, in response to determining that the camera is in the wearable mode, the executing body may further send information indicating that power supply from the built-in battery of the electronic device is enabled.
Step 403, acquiring a text image to be recognized.
In this embodiment, the executing body may acquire the text image to be recognized in various ways. As an example, the executing body may acquire, through a bus in the electronic device, the text image to be recognized captured by the camera.
It should be noted that the step in which the camera acquires the text image to be recognized is consistent with the description of step 203 in the foregoing embodiment, and is not described herein again.
Step 404, sending the text image to be recognized to the target device, so that the target device generates a text recognition result.
The steps 402 and 404 are respectively consistent with the steps 202 and 204 in the foregoing embodiment, and the above description on the steps 202 and 204 and their optional implementation manners also applies to the steps 402 and 404, which is not described herein again.
As can be seen from Fig. 4, the flow 400 of the method for recognizing text in this embodiment first obtains installation state information of the electronic device, where the installation state information indicates whether the electronic device is in the fixed mode or the wearable mode; then wakes up the camera in a manner matching the installation state information; next acquires a text image to be recognized; and finally sends the text image to be recognized to the target device so that the target device generates a text recognition result. The wake-up manner is thus adjusted according to the installation state of the electronic device for text recognition: energy consumption is reduced in the fixed mode, text recognition is responded to promptly in the wearable mode, and energy consumption and usability requirements are both satisfied.
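For illustration only, the following Python sketch strings the four steps of flow 400 together on the device side. The read_sensors, wake_camera, and capture_image callables and the target-device URL are hypothetical placeholders; the patent does not specify how the image is transmitted, and an HTTP POST is assumed here purely for concreteness.

    # Hypothetical end-to-end sketch of flow 400; the callables and the
    # target-device URL are placeholders, and HTTP is an assumed transport.
    from typing import Callable, Dict

    import requests  # pip install requests

    TARGET_URL = "http://target-device.local/recognize"  # hypothetical endpoint

    def recognize_text_once(read_sensors: Callable[[], Dict[str, object]],
                            wake_camera: Callable[[Dict[str, object]], None],
                            capture_image: Callable[[], bytes]) -> str:
        # Step 401: acquire installation state information from the built-in sensor.
        state = read_sensors()
        # Step 402: wake the camera in the manner matching that state.
        wake_camera(state)
        # Step 403: acquire the text image to be recognized.
        image_bytes = capture_image()
        # Step 404: send the image to the target device and return its result.
        response = requests.post(TARGET_URL, data=image_bytes, timeout=10)
        response.raise_for_status()
        return response.text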
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for recognizing a text, which corresponds to the embodiment of the method shown in fig. 4, and which can be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for recognizing text provided by this embodiment includes a state acquisition unit 501, a wake-up unit 502, an image acquisition unit 503, and a sending unit 504. The state acquisition unit 501 is configured to acquire installation state information of the electronic device, where the installation state information indicates whether the electronic device is in a fixed mode or a wearable mode; the wake-up unit 502 is configured to wake up a camera of the electronic device in a manner matching the installation state information; the image acquisition unit 503 is configured to acquire a text image to be recognized; and the sending unit 504 is configured to send the text image to be recognized to the target device so that the target device generates a text recognition result.
In this embodiment, for the detailed processing of the state acquisition unit 501, the wake-up unit 502, the image acquisition unit 503, and the sending unit 504 in the apparatus 500 for recognizing text, and for the technical effects thereof, reference may be made to the descriptions of step 401, step 402, step 403, and step 404 in the embodiment corresponding to Fig. 4, which are not repeated here.
In some optional implementations of the present embodiment, the wake-up unit 502 may include a detection module (not shown in the figure) and a wake-up module (not shown in the figure). Wherein the detection module may be configured to determine whether the camera is in a fixed mode or a wearable mode based on the installation state information; in response to determining that the camera is in the fixed mode, a wake-up instruction is detected. The wake-up module may be configured to wake up the camera in response to detecting the wake-up instruction.
In some optional implementations of this embodiment, the wake-up unit 502 may be further configured to wake up the camera in response to determining that the camera is in the wearable mode.
In some optional implementations of this embodiment, the installation state information may also be used to indicate whether the electronic device is electrically connected to a rechargeable base. The apparatus 500 for recognizing text may further include a charging indication unit (not shown in the figure). The charging indication unit may be configured to send information indicating that charging of a built-in battery of the electronic device is permitted, in response to determining that the installation state information indicates that the electronic device is electrically connected to the rechargeable base.
In some optional implementations of the present embodiment, the apparatus 500 for recognizing text may further include a power supply indication unit (not shown in the figure). Wherein the power indication unit may be configured to send information indicating that power supply of a built-in battery of the electronic device is enabled in response to determining that the camera is in the wearable mode.
In the apparatus provided by the above embodiment of the present application, the state acquisition unit 501 acquires the installation state information of the electronic device, where the installation state information indicates whether the electronic device is in the fixed mode or the wearable mode; the wake-up unit 502 then wakes up the camera of the electronic device in a manner matching the installation state information; next, the image acquisition unit 503 acquires a text image to be recognized; finally, the sending unit 504 sends the text image to be recognized to the target device so that the target device generates a text recognition result. The wake-up manner is thus adjusted according to the installation state of the electronic device for text recognition: energy consumption is reduced in the fixed mode, text recognition is responded to promptly in the wearable mode, and energy consumption and usability requirements are both satisfied.
Referring now to Fig. 6, shown is a schematic diagram of an electronic device (e.g., the terminal device 101 of Fig. 1) 600 suitable for implementing embodiments of the present application. The terminal device in the embodiments of the present application may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a PDA (personal digital assistant), a PAD (tablet), a point-and-read device, and the like. The terminal device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic apparatus 600 may include a Central Processing Unit (CPU)601, a memory 602, an input unit 603, and an output unit 604, wherein the central processing unit 601, the memory 602, the input unit 603, and the output unit 604 are connected to each other through a bus 605. Here, the method according to an embodiment of the present application may be implemented as a computer program and stored in the memory 602. The central processor 601 in the electronic device 600 specifically implements the text recognition function defined in the method of the embodiment of the present application by calling the above-described computer program stored in the memory 602. In some implementations, the input unit 603 may include a built-in sensor and a camera. The output unit 604 may be a display screen or the like that can be used to display the text recognition result. Thus, the central processor 601, when calling the above-described computer program to execute the text recognition function, can control the input unit 603 to acquire the installation state information from the outside and control the output unit 604 to display the text recognition result.
It should be noted that the computer readable medium described in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring installation state information of the electronic equipment, wherein the installation state information is used for indicating that the electronic equipment is in a fixed mode or a wearable mode; awakening a camera of the electronic equipment in a mode of being matched with the installation state information; acquiring a text image to be recognized; and sending the text image to be recognized to the target equipment so as to enable the target equipment to generate a text recognition result.
Computer program code for carrying out operations for embodiments of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprises a state acquisition unit, a wake-up unit, an image acquisition unit and a sending unit. The names of these units do not in some cases constitute a limitation on the units themselves, and for example, the status acquisition unit may also be described as a unit that acquires installation status information of the electronic device, where the installation status information indicates that the electronic device is in a fixed mode or a wearable mode.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present application is not limited to technical solutions formed by the specific combination of the above features, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present application.

Claims (10)

1. An electronic device for recognizing text, comprising:
a built-in sensor configured to generate installation state information, wherein the installation state information indicates that the electronic device is in a fixed mode or a wearable mode;
the camera is configured to acquire a text image to be recognized;
a processor configured to wake up the camera in a manner matching the installation state information;
and the communication module is configured to send the text image to be recognized to target equipment so that the target equipment generates a text recognition result.
2. The electronic device of claim 1, wherein the processor is further configured to:
determine whether the camera is in a fixed mode or a wearable mode based on the installation state information;
in response to determining that the camera is in a fixed mode, detect a wake-up instruction; and
in response to detecting the wake-up instruction, wake up the camera.
3. The electronic device of claim 1, wherein the processor is further configured to:
in response to determining that the camera is in a wearable mode, wake up the camera.
4. The electronic device of claim 2 or 3, wherein the electronic device further comprises a built-in battery.
5. The electronic device of claim 4, wherein the installation state information is further used to indicate whether the electronic device is electrically connected to a rechargeable base; and
the processor is further configured to:
in response to determining that the installation state information indicates that the electronic device is electrically connected to the rechargeable base, send information indicating that charging of the built-in battery is permitted.
6. The electronic device of claim 4, wherein the processor is further configured to:
in response to determining that the camera is in a wearable mode, send information indicating that power supply from the built-in battery is enabled.
7. The electronic device of claim 1, wherein the built-in sensor comprises an infrared sensor; and
the built-in sensor is further configured to:
generate installation state information indicating that the camera is in a wearable mode, in response to determining that a human body part is detected within a preset sensing range.
8. A method for recognizing text, applied to an electronic device as claimed in any one of claims 1 to 7, comprising:
acquiring installation state information of the electronic equipment;
awakening the camera of the electronic equipment in a mode of being matched with the installation state information;
acquiring a text image to be recognized;
and sending the text image to be recognized to target equipment so that the target equipment generates a text recognition result.
9. A text recognition apparatus for an electronic device, applied to the electronic device of any one of claims 1-7, comprising:
a status acquisition unit configured to acquire installation status information of the electronic device;
a wake-up unit configured to wake up a camera of the electronic device in a manner matching the installation state information;
an image acquisition unit configured to acquire a text image to be recognized;
and the sending unit is configured to send the text image to be recognized to target equipment so that the target equipment generates a text recognition result.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method of claim 8.
CN202010118376.8A 2020-02-26 2020-02-26 Electronic device, method, apparatus, and medium for recognizing text Active CN112308075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010118376.8A CN112308075B (en) 2020-02-26 2020-02-26 Electronic device, method, apparatus, and medium for recognizing text

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010118376.8A CN112308075B (en) 2020-02-26 2020-02-26 Electronic device, method, apparatus, and medium for recognizing text

Publications (2)

Publication Number Publication Date
CN112308075A CN112308075A (en) 2021-02-02
CN112308075B true CN112308075B (en) 2022-06-10

Family

ID=74336552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010118376.8A Active CN112308075B (en) 2020-02-26 2020-02-26 Electronic device, method, apparatus, and medium for recognizing text

Country Status (1)

Country Link
CN (1) CN112308075B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677172A (en) * 2015-12-28 2016-06-15 上海摩软通讯技术有限公司 Capture method of field images, and mobile terminal
CN107168539A (en) * 2017-06-27 2017-09-15 乐视致新电子科技(天津)有限公司 A kind of equipment awakening method, device and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10185416B2 (en) * 2012-11-20 2019-01-22 Samsung Electronics Co., Ltd. User gesture input to wearable electronic device involving movement of device
CN108986805B (en) * 2018-06-29 2019-11-08 百度在线网络技术(北京)有限公司 Method and apparatus for sending information

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677172A (en) * 2015-12-28 2016-06-15 上海摩软通讯技术有限公司 Capture method of field images, and mobile terminal
CN107168539A (en) * 2017-06-27 2017-09-15 乐视致新电子科技(天津)有限公司 A kind of equipment awakening method, device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Energy efficient camera node activation control in multi-tier wireless visual sensor networks; Hadi S. Aghdasi et al.; Wireless Networks; 2012-08-21; pp. 725-740 *
Design and implementation of an embedded video surveillance system with optimized power management; Lu Wen; China Master's Theses Full-text Database, Information Science and Technology; 2015-10-15; Vol. 2015, No. 10; pp. I136-157 *

Also Published As

Publication number Publication date
CN112308075A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
US9996109B2 (en) Identifying gestures using motion data
CN109154858B (en) Intelligent electronic device and operation method thereof
US20230376549A1 (en) Determining relevant information based on user interactions
KR102512614B1 (en) Electronic device audio enhancement and method thereof
CN111104980A (en) Method, device, equipment and storage medium for determining classification result
KR20150045637A (en) Method for operating user interfacing and electronic device thereof
CN112860169A (en) Interaction method and device, computer readable medium and electronic equipment
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN111681655A (en) Voice control method and device, electronic equipment and storage medium
CN109600301B (en) Message processing method and device
CN110209335A (en) The control method and device of intelligent terminal
CN112308075B (en) Electronic device, method, apparatus, and medium for recognizing text
WO2021017823A1 (en) Charging method, electronic device and storage medium
CN110958352B (en) Network signal display method, device, storage medium and mobile terminal
CN111797017A (en) Method and device for storing log, test equipment and storage medium
CN111862972A (en) Voice interaction service method, device, equipment and storage medium
CN111159551A (en) Display method and device of user-generated content and computer equipment
CN113744736A (en) Command word recognition method and device, electronic equipment and storage medium
CN113343709A (en) Method for training intention recognition model, method, device and equipment for intention recognition
CN113673224B (en) Method and device for recognizing popular vocabulary, computer equipment and readable storage medium
CN111506439B (en) Data acquisition method and device, storage medium and mobile terminal
CN111064846A (en) Head-mounted equipment and voice secretary setting method and device
CN114299945A (en) Voice signal recognition method and device, electronic equipment, storage medium and product
CN114816871A (en) Method, device, equipment and storage medium for anomaly detection
CN112463603A (en) Memory leak detection method and device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant