CN113608813A - Processing method, intelligent terminal and storage medium - Google Patents

Processing method, intelligent terminal and storage medium

Info

Publication number
CN113608813A
CN113608813A
Authority
CN
China
Prior art keywords
language
target object
display
data
information
Prior art date
Legal status
Pending
Application number
CN202110915449.0A
Other languages
Chinese (zh)
Inventor
张鹏
Current Assignee
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd filed Critical Shanghai Chuanying Information Technology Co Ltd
Priority to CN202110915449.0A priority Critical patent/CN113608813A/en
Publication of CN113608813A publication Critical patent/CN113608813A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G06F 9/454: Multi-language systems; Localisation; Internationalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation

Abstract

The application provides a processing method, an intelligent terminal and a storage medium. The processing method includes: in response to a display instruction for a first language, when the first language is beyond the resource scope of a target object, translating the target object into the first language and determining or generating display data that represents the target object in the first language; and loading the display data for the target object, so that the target object is displayed in the first language. In this way, the target object can be displayed in the first language, the user's need to use the first language is met, and the flexibility of the target object, such as the intelligent terminal and/or an application program, during language switching and the corresponding user experience are improved.

Description

Processing method, intelligent terminal and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a processing method, an intelligent terminal, and a storage medium.
Background
With the rapid development of computer technology, intelligent terminals such as mobile phones, tablet computers and/or computers have spread to user groups in many regions. Because these groups often have different language backgrounds, an intelligent terminal needs to load multiple language resources that can be called when the terminal and its application programs are displayed.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: when an intelligent terminal cannot find a matching target language locally, it can only provide default language resources for the terminal and its application programs to call for display, so the display requirement of the target language cannot be met and user experience is affected.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides a processing method so that a user can use an intelligent terminal, such as a mobile phone, and the application programs on it in a familiar language.
In order to solve the above technical problem, the present application provides a processing method, which can be applied to an intelligent terminal or to various application programs running on the intelligent terminal, and which includes the following steps:
S36: in response to a display instruction for a first language, when the first language is beyond the resource scope of a target object, translating the target object into the first language, and determining or generating display data that represents the target object in the first language;
S37: loading the display data for the target object, and displaying the target object in the first language.
Optionally, the first language being beyond the resource scope of the target object includes: the first language being beyond the resource scope recorded in a preset database of the target object.
Optionally, the obtaining process of the display instruction includes:
responding to a switching instruction of a first language;
if the target object has enabled an intelligent language switching function, determining or generating the display instruction for the first language; and/or,
if the target object has not enabled the intelligent language switching function, outputting prompt information for enabling the intelligent language switching function, and determining or generating the display instruction for the first language after responding to an enabling instruction corresponding to the prompt information.
Optionally, in step S36, translating the target object into the first language and determining or generating the display data that represents the target object in the first language includes:
sending first object display content to at least one translation end associated with the first language;
and determining target translation content from the initial translation content provided by the at least one translation end, and determining or generating the display data according to the target translation content.
Optionally, the translation end may be a translation application or service, or may be another terminal or device capable of implementing a translation function.
Optionally, a display interface of the target object is displayed through preset information;
the step of S37 includes: displaying a display interface of the target object, loading corresponding translation data for a preset display area of the display interface, and displaying preset information of the preset display area by adopting the first language.
Optionally, after the step of S36, the method further includes:
and S35, storing the display data into a preset file.
Optionally, the preset file is named by using newly added mark information; the step of S35 includes:
setting data mark information of the display data according to the newly added mark information and language mark information corresponding to the display data;
and storing the display data carrying the data mark information to the preset file.
The application also provides a processing method, which is applied to an intelligent terminal or to various application programs running on the intelligent terminal, and which includes the following steps:
S13: in response to a language-addition instruction for a first language, when the first language is beyond the resource scope of the target object, determining the out-of-scope language as the newly added language;
S14: translating the target object into the newly added language, and determining or generating newly added language data that represents the target object in the newly added language;
S15: saving the newly added language data to a preset file.
Optionally, the first language being beyond the resource scope of the target object includes: the first language being beyond the resource scope recorded in a preset database of the target object.
Optionally, the obtaining process of the language-addition instruction includes:
responding to language-addition trigger information;
if the target object has enabled an intelligent language-addition function, determining or generating the language-addition instruction; and/or,
if the target object has not enabled the intelligent language-addition function, outputting prompt information for enabling the intelligent language-addition function, and determining or generating the language-addition instruction after responding to an enabling instruction corresponding to the prompt information.
Optionally, the preset file is named by using newly added mark information; the step of S15 includes:
setting data mark information of the newly added language data according to the newly added mark information and language mark information corresponding to the newly added language data;
and storing the newly added language data carrying the data mark information to the preset file.
Optionally, the method further comprises:
s16, responding to a display instruction of a second language, and searching display data representing the target object by the second language;
s17, loading the display data for the target object to display the target object in the second language.
Optionally, the searching for display data representing the target object in the second language includes: and searching display data representing the target object by adopting the second language in a preset database.
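A minimal sketch of steps S16 and S17, again using the assumed repository type from the earlier examples:

```kotlin
// Sketch of steps S16/S17: look up previously stored display data for a second language
// and load it for display; repository and render remain illustrative assumptions.
fun displayInSecondLanguage(
    repository: LanguageRepository,
    render: (DisplayData) -> Unit,
    secondLanguage: String
): Boolean {
    if (!repository.hasLanguage(secondLanguage)) return false   // S16: no stored display data
    render(repository.load(secondLanguage))                     // S17: load and display
    return true
}
```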
The application also provides an intelligent terminal, including a memory and a processor. The memory stores a processing program which, when executed by the processor, implements the steps of the processing method described above.
The present application also provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of the processing method as described above.
As described herein, the processing method of the present application may be applied to an intelligent terminal and/or an application program. In response to a display instruction for a first language, when the first language is beyond the resource scope of a target object, the target object is translated into the first language, display data that represents the target object in the first language is determined or generated, and the display data is loaded for the target object so that the target object is displayed in the first language. The method thus meets the user's need to use the first language and improves the flexibility, and the corresponding user experience, of the target object such as the intelligent terminal and/or the application program during language switching.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware structure of an intelligent terminal implementing various embodiments of the present application;
FIG. 2 is a communication network system architecture diagram according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a processing method according to a first embodiment;
FIGS. 4a, 4b and 4c are interface diagrams of language switching modes according to an embodiment of the present application;
FIGS. 5a and 5b are schematic diagrams of a language setting interface according to an embodiment of the present application;
FIG. 6a is a schematic diagram of a preset database according to an embodiment of the present application;
FIG. 6b is a schematic diagram of a preset file according to an embodiment of the present application;
FIG. 7 is a schematic flowchart illustrating a processing method according to a third embodiment;
FIGS. 8a, 8b, 8c and 8d are interface diagrams of adding a new language according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, reciting an element with the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, similarly named elements, features, or elements in different embodiments of the disclosure may have the same meaning or may have different meanings; the particular meaning is determined by their interpretation in the embodiment or by further context within the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, and C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times and in different orders, and which may be performed alternately or in turns with other steps or with sub-steps or stages of other steps.
The word "if" as used herein may, depending on the context, be interpreted as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, as "when it is determined," "in response to determining," "when (a stated condition or event) is detected," or "in response to detecting (a stated condition or event)."
It should be noted that step numbers such as S13 and S14 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S14 first and then S13 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application, and have no specific meaning in themselves. Thus, "module", "component" or "unit" may be used mixedly.
The smart terminal may be implemented in various forms. For example, the smart terminal described in the present application may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
While the following description takes an intelligent terminal as an example, those skilled in the art will appreciate that, except for elements used specifically for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of an intelligent terminal for implementing various embodiments of the present application, the intelligent terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the intelligent terminal architecture shown in fig. 1 does not constitute a limitation of the intelligent terminal, and that the intelligent terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each component of the intelligent terminal with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the intelligent terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the smart terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the smart terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the smart terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The smart terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor, the ambient light sensor may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1061 and/or the backlight when the smart terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the intelligent terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the smart terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the smart terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the smart terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the smart terminal 100 or may be used to transmit data between the smart terminal 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area; optionally, the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, and the like), and so on, while the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the intelligent terminal, connects various parts of the entire intelligent terminal using various interfaces and lines, and performs various functions of the intelligent terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the intelligent terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The intelligent terminal 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown in fig. 1, the smart terminal 100 may further include a bluetooth module or the like, which is not described herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the intelligent terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present disclosure, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME2031 is a control node that handles signaling between the UE201 and the EPC203, providing bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above intelligent terminal hardware structure and communication network system, various embodiments of the present application are provided.
In a first embodiment, referring to fig. 3, the present application provides a processing method, which may be applied to the intelligent terminal shown in fig. 1 and which includes steps S36 and S37:
and S36, responding to the display instruction of the first language, translating the target object by adopting the first language when the first language exceeds the resource range of the target object, and determining or generating display data for representing the target object by adopting the first language.
Optionally, the first language is beyond the resource scope of the target object, including: the first language is beyond the resource scope of the preset database record of the target object.
The target object may include the intelligent terminal and/or various applications on the intelligent terminal. The first language is generally a language the user uses daily, such as the user's native language and/or working language, and may be, for example, Chinese, English, Japanese, Korean, or another national or regional language. Displaying the target object in the first language allows the intelligent terminal to be adopted by groups with different language habits in different regions and improves the experience that the intelligent terminal brings to various users.
After monitoring the switching instruction of the first language, the intelligent terminal can directly determine or generate a display instruction of the first language; the current switching response mode can be further identified, and the display instruction of the first language is determined or generated according to the current switching response mode. In one implementation, the obtaining of the display instruction includes:
responding to a switching instruction of a first language;
if the target object has enabled the intelligent language switching function, determining or generating the display instruction for the first language; and/or,
if the target object has not enabled the intelligent language switching function, outputting prompt information for enabling the intelligent language switching function, and determining or generating the display instruction for the first language after responding to an enabling instruction corresponding to the prompt information.
Alternatively, the switching response modes of the target object may include an intelligent switching mode with an automatic language switching function and a non-intelligent switching mode without it. Optionally, the target object may provide a language switching mode control interface on its settings interface, as shown in fig. 4a and fig. 4b: fig. 4a shows the target object with the intelligent switching mode enabled (the intelligent language switching function is available), and fig. 4b shows it with the intelligent switching mode disabled. In the intelligent switching mode, the intelligent terminal determines or generates the display instruction directly after monitoring the switching instruction for the first language, which improves the efficiency of obtaining the display instruction. In the non-intelligent switching mode, after monitoring the switching instruction, the intelligent terminal may output prompt information asking whether to switch to the intelligent switching mode (i.e., whether to enable the automatic language switching function), for example through a pop-up and/or voice playback. After receiving the prompt, the user may input confirmation that the mode should not be switched, in which case the target object exits the current language switching operation; and/or the user may input confirmation that the mode should be switched, in which case the target object switches to the intelligent switching mode, enables the automatic language switching function, and then determines or generates the display instruction for the first language, which ensures that the determined or generated instruction is authorized by the user and further improves user experience. Optionally, when switching to the intelligent switching mode is required, after the user inputs the corresponding confirmation, the target object may jump to the language response mode interface shown in fig. 4b; as shown in fig. 4c, the user clicks the button corresponding to intelligent switching on the language switching mode interface to enable the intelligent switching mode, after which the language response mode interface appears as in fig. 4a. A sketch of this mode handling follows.
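For illustration, the following hedged Kotlin sketch captures the two response modes described above; the prompt and confirmation callbacks are placeholders for the pop-up or voice prompt, and all names are assumptions.

```kotlin
// Assumed sketch of the intelligent / non-intelligent switching modes; not an API of the application.
class SwitchInstructionHandler(
    private val isSmartSwitchEnabled: () -> Boolean,
    private val promptUserToEnable: () -> Boolean,      // returns true if the user confirms enabling
    private val enableSmartSwitch: () -> Unit,
    private val emitDisplayInstruction: (String) -> Unit
) {
    fun onSwitchInstruction(firstLanguage: String) {
        if (isSmartSwitchEnabled()) {
            emitDisplayInstruction(firstLanguage)        // intelligent mode: emit the display instruction directly
        } else if (promptUserToEnable()) {
            enableSmartSwitch()                          // user agreed: enable the mode, then emit
            emitDisplayInstruction(firstLanguage)
        }
        // otherwise the current language switching operation is simply abandoned
    }
}
```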
Alternatively, the switching instruction in the first language may include a switching instruction generated by the target object in response to any switching operation input by the user and/or a switching instruction determined or generated by at least one application program in the corresponding intelligent terminal, and the like. Optionally, the switching instruction may include at least one of:
a switching operation of the target object's language switching control to the first language is detected; for example, the user clicks a trigger control corresponding to the first language in the language setting interface of the target object, or enters the name of the target language into the corresponding language search box, so that the intelligent terminal detects a switching instruction for the first language. Taking a mobile phone as the target object for detailed description, the language setting interface of the mobile phone is shown in fig. 5a. If the user wants the mobile phone to be displayed in English (British), the user can click the button corresponding to English (British) on the interface to input the corresponding switching instruction. If the user wants the mobile phone to be displayed in Mongolian, since the displayed language list does not include Mongolian, the user can enter Mongolian in the search box under the other-languages entry to input a switching instruction corresponding to Mongolian to the mobile phone.
an instruction to switch to the first language is generated when the target object is started in an environment that uses the first language and the current language is inconsistent with the first language; for example, when an application program on the intelligent terminal is started, it may identify the language used by the intelligent terminal and take that language as the first language, and if the language currently used by the application program is inconsistent with the first language, the switching instruction for the first language is determined or generated.
In this example, the switching instruction for the first language can be determined or generated according to an instruction input by the user to the target object and/or the language environment in which the target object is located, which improves the flexibility of determining or generating the switching instruction.
Optionally, the target object, such as the intelligent terminal and each application installed on it, has a corresponding preset database (e.g., a language resource library) that records language data representing the target object in various languages. The preset database may record an initial language configuration file of the target object and may also record other language data of the target object (e.g., a preset file). The initial language configuration file records the original language data configured for the target object, each piece of which represents the target object in one of the originally configured languages. In step S36, the pieces of language data in the preset database are searched; if no language data matching the first language is found, it is determined that the first language is beyond the resource scope of the target object, and the target object is then translated into the first language to determine or generate the required display data. If language data matching the first language is found, the found file is used as the display data, the display data is loaded, and the target object is displayed directly in the first language, which keeps the display efficient. Optionally, when searching for display data, the target object may first traverse the preset file, and only if no display data is found there, traverse the initial language configuration file among the pieces of language data in the preset database.
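The lookup order just described can be sketched as follows; both stores and the keying by language mark are assumptions for illustration.

```kotlin
// Sketch of the lookup order: traverse the preset file first, then the initial language
// configuration file; a null result means the first language is beyond the resource scope.
fun findDisplayData(
    presetFile: Map<String, DisplayData>,             // assumed: keyed by language mark, e.g. "fr"
    initialLanguageConfig: Map<String, DisplayData>,
    firstLanguage: String
): DisplayData? =
    presetFile[firstLanguage] ?: initialLanguageConfig[firstLanguage]
```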
Optionally, the target object may be translated by a translation end, where the translation end may be translation software pre-installed on the corresponding intelligent terminal (such as an online translation application and/or an offline translation application) or a translation terminal in communication connection with the target object. Optionally, at least one translation end may be used to translate the target object, and the translation content provided by each translation end is obtained so as to determine the translation content finally adopted and determine or generate the newly added language data, which helps ensure the accuracy of the obtained newly added language data.
Optionally, the target object uses first object display content to represent the content of each of its current display interfaces. In step S36, translating the target object into the first language and determining or generating the display data that represents the target object in the first language includes:
sending the first object display content to at least one translation end associated with the first language;
and determining target translation content from the initial translation content provided by the at least one translation end, and determining or generating the display data according to the target translation content.
Optionally, the first object display content may be expressed in a widely used common language or in characters that can be recognized by the various translation ends, such as hash characters, so that after receiving the first object display content, a translation end can recognize it accurately, which helps ensure the accuracy of the target translation content. The target translation content obtained by translating the first object display content into the first language may be recorded in a format that the target object can recognize (for example, a proto file), so that the target object can directly load the corresponding display data later and read and use it accurately. Optionally, the first object display content includes the text information and/or audio information that each display interface of the target object needs to present. It may include at least one unit content, and each unit content may carry information such as the mark of the corresponding display interface and the display position and/or display timing relative to that interface, so that each unit content can be presented in the corresponding language when the target object displays each interface. Each unit content may also carry display parameters, such as the font and font size of the displayed text and/or the tone, volume, and so on of the corresponding audio, to ensure the content display effect of each display interface. An illustrative data model follows.
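The following data class is only an illustrative model of such a unit content; the field names mirror the description above and are not a format defined by the application.

```kotlin
// Assumed data model for a "unit content" of the first object display content.
data class UnitContent(
    val interfaceMark: String,              // mark of the display interface the content belongs to
    val displayPosition: String? = null,    // display position within that interface (for text)
    val displayTimingMs: Long? = null,      // display/playback timing relative to the interface (for audio)
    val text: String? = null,               // text information to be translated, if any
    val audioTranscript: String? = null,    // audio information, represented here as a transcript
    val displayParams: Map<String, String> = emptyMap()  // e.g. font, font size, tone, volume
)

// The first object display content is then a collection of such unit contents.
typealias FirstObjectDisplayContent = List<UnitContent>
```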
Optionally, the object recognition capability of the translation end and the format of the translation content it determines or generates may be matched to the format of the first object display content. If the first object display content includes text information, the translation end has a character recognition function, can accurately recognize content expressed in character form, and translates it into corresponding text; and/or, if the first object display content includes audio information, the translation end has a speech recognition function, can accurately recognize content expressed in audio form, and translates it into corresponding audio; and/or, if the first object display content includes both text information and audio information, the translation end has both character recognition and speech recognition functions, accurately recognizes content in both forms, translates the text information into corresponding text, and translates the audio information into corresponding audio. Optionally, if the translation end can only recognize text information, the translation content is determined or generated in character form. In that case, when the first object display content includes audio information, the target object may first send the first object display content to a content conversion end that can recognize text and audio information and convert between them. The content conversion end recognizes the audio information in the first object display content, records audio attribute information such as its time and/or position, and converts the audio into corresponding text; the converted text and the original text of the first object display content are then combined into new object display content and sent to the translation end. After the translation end produces the translation, the translated content is sent back to the content conversion end, which uses the audio attribute information to extract the corresponding audio translation text, converts that text into audio to obtain the translated audio, and determines or generates the corresponding language data from the translated audio and the remaining translated content (the translation corresponding to the text information), so as to ensure the accuracy and completeness of the obtained language data.
Optionally, in this implementation the content with the highest accuracy may be selected from the initial translation contents by comparing, for example, the number of sentences, the number of words, and/or the proportion of related symbols; or a translation end with high reliability may be selected as an evaluation end, and the evaluation end evaluates each initial translation content to obtain the most accurate content as the target translation content, further ensuring the accuracy of the obtained display data.
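One possible, purely illustrative scoring of candidate translations along the lines described above (closeness of sentence count, word count, and symbol proportion to the source content); the weights and the heuristic itself are assumptions, not part of the application.

```kotlin
import kotlin.math.abs

// Assumed heuristic for picking the target translation content from several candidates.
fun selectTargetTranslation(source: String, candidates: List<String>): String? {
    fun sentences(s: String) = s.split('.', '!', '?', '。', '！', '？').count { it.isNotBlank() }
    fun words(s: String) = s.split(Regex("\\s+")).count { it.isNotBlank() }
    fun symbolRatio(s: String) =
        if (s.isEmpty()) 0.0
        else s.count { !it.isLetterOrDigit() && !it.isWhitespace() }.toDouble() / s.length

    // Lower score = structurally closer to the source content.
    fun score(candidate: String): Double =
        abs(sentences(candidate) - sentences(source)).toDouble() +
        abs(words(candidate) - words(source)) * 0.1 +
        abs(symbolRatio(candidate) - symbolRatio(source)) * 10.0

    return candidates.minByOrNull { score(it) }
}
```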
And S37, loading display data for the target object and displaying the target object by adopting the first language.
In step S37, the display data is loaded for the target object so that the target object presents each display interface in the first language. This meets the user's need to use the first language, improves the flexibility of the intelligent terminal during language switching, improves the user experience the intelligent terminal brings, and makes the intelligent terminal easier for various groups in various regions to accept. Optionally, the target object presents a corresponding state through at least one display interface. For example, the intelligent terminal presents a power-on state through a startup interface and a power-off state through a shutdown interface; a communication application presents its communication state through chat interfaces and/or an address book interface; a shopping application presents its shopping state through a home page, brand or product display interfaces, and product purchase interfaces; a settings application presents each setting state through its setting interfaces. The target object is displayed according to the loaded display data, so that when it presents each display interface it can show text corresponding to the first language in the display area, pop up prompt information expressed in the first language, and/or play speech corresponding to the first language, ensuring the display effect of the first language in all respects.
Optionally, the display interface of the target object is presented through preset information. The preset information may include information defining the display content of the target object, such as the first object display content, and may include text information and/or audio information. Correspondingly, the display data includes text translation data corresponding to the text information and/or audio translation data corresponding to the audio information.
Step S37 includes: displaying the display interface of the target object, loading the corresponding translation data for a preset display area of the display interface, and presenting the preset information of the preset display area in the first language. When the preset information includes both text information and audio information, step S37 may further include: loading the corresponding text translation data for the preset display area of each display interface so that the text information of the preset display area is displayed in the first language, and loading the corresponding audio translation data at the voice playing time of each display interface so that the corresponding audio information is played in the first language at that time.
The preset information, such as the first object display content, may include at least one unit content, and each unit content may carry the mark information of the corresponding display interface and information such as the display position and/or display timing relative to that interface. Correspondingly, the text translation data carries the mark information of the corresponding display interface and the display position within that interface, and the audio translation data carries the mark information of the corresponding display interface and the display timing relative to that interface. In this way, when displaying each interface, the target object can show the corresponding text translation data in each text display area (for example, after switching to English (British), the display interface shown in fig. 5a becomes the one shown in fig. 5b) and can play the corresponding audio information at each voice display timing, ensuring the display effect of the target object. A sketch of this loading step follows.
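The following fragment sketches this per-interface loading of text and audio translation data, reusing the illustrative UnitContent model; Renderer and AudioPlayer are assumed stand-ins for the terminal's display and playback layer, and the key format is made up for the example.

```kotlin
// Assumed interfaces standing in for the UI layer of the target object.
interface Renderer { fun setText(interfaceMark: String, position: String, text: String) }
interface AudioPlayer { fun scheduleAt(interfaceMark: String, timingMs: Long, transcript: String) }

// Apply the loaded translation data: text goes to its preset display area, audio is
// scheduled at its voice playing time. The "mark:position" key format is an assumption.
fun applyDisplayData(
    units: List<UnitContent>,
    translated: Map<String, String>,
    renderer: Renderer,
    player: AudioPlayer
) {
    for (unit in units) {
        val key = "${unit.interfaceMark}:${unit.displayPosition ?: unit.displayTimingMs}"
        val content = translated[key] ?: continue          // no translation for this unit content
        when {
            unit.displayPosition != null ->
                renderer.setText(unit.interfaceMark, unit.displayPosition, content)
            unit.displayTimingMs != null ->
                player.scheduleAt(unit.interfaceMark, unit.displayTimingMs, content)
        }
    }
}
```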
Optionally, after the step of S36, the processing method further includes:
S35: storing the display data in a preset file; the preset file is used to store language data that represents the target object and is beyond the recording scope of the initial language configuration file of the target object.
This implementation stores the display data in the preset file of the target object, so that the next time the target object is displayed in the first language, the display data can be read directly from the preset file, loaded, and displayed. This improves subsequent display efficiency, reduces the resources occupied during subsequent display, and increases the operating speed of the intelligent terminal in that process.
Optionally, the target object may create the preset file when it first determines or generates language data beyond the recording scope of the initial language configuration file, or it may set up the preset file in advance at installation or initialization; the preset file records other language data, not recorded in the initial language configuration file, that represents the target object. Alternatively, as shown in fig. 6a, the preset file may be stored in the preset database of the target object together with the initial language configuration file, so that the target object can traverse the files in the preset database in sequence when searching for required language data, and the judgment of whether the first language is beyond the scope of the preset database of the target object can be more complete and accurate.
Optionally, the preset file is named by the newly added mark information; the step of S35 includes:
setting data mark information of the display data according to the newly added mark information and language mark information corresponding to the display data;
and storing the display data carrying the data mark information to a preset file.
The newly added mark information may correspond to the preset file one to one, and may be a unique mark of the preset file, for example, if the object mark information of the target object includes the name and the version number of the target object, the name and the version number of the target object may be set as the newly added mark information of the corresponding preset file, so that the newly added mark information is matched with the object mark information of the target object, and subsequent searching is facilitated. The data flag information of the display data includes newly added flag information corresponding to the preset file and language flag information representing the first language, for example, if the newly added flag information of the preset file of a certain target object is key and the language flag information of the first language is value, the data flag information of the display data may be key-value. Therefore, the data mark information of the display data corresponds to the newly added mark information of the corresponding preset file, so that the display data and the preset file can be quickly matched according to the data mark information and the newly added mark information.
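As a purely illustrative example of the key-value naming above, assuming the newly added mark information is built from the target object's name and version:

```kotlin
// Hypothetical construction of the newly added mark information and the data mark information.
fun newAddedMarkInfo(objectName: String, version: String): String = "$objectName-$version"

fun dataMarkInfo(newAddedMark: String, languageMark: String): String =
    "$newAddedMark-$languageMark"      // the "key-value" pattern described above

// Example with made-up values: an app named "settings" at version 2.1 storing French data
// would yield dataMarkInfo(newAddedMarkInfo("settings", "2.1"), "fr") == "settings-2.1-fr".
```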
Optionally, the processing method further includes:
when the target object is updated, acquiring updated second object display content, and updating language data of each part of the preset file according to the second object display content;
and/or,
when the object mark information of the target object is updated, updating the newly added mark information of the preset file according to the updated object mark information, and respectively updating the data mark information of each part of language data in the preset file according to the updated newly added mark information.
The example can monitor the update behaviors of the version upgrade of the target object and the like to obtain the object display content change condition and/or the object mark information condition of the target object in the update process, and update each part of the language data and/or the newly added mark information of the preset file according to various change conditions, so that each part of the language data and/or the newly added mark information of the preset file are matched with the updated target object.
Optionally, if the target object only updates the display content, the updated display content of the second object may be obtained, and the language data of each part of the preset file is updated according to the display content of the second object, so as to ensure the accuracy of the content represented by the language data of each part. And/or, if the target object only updates the object mark information (such as version number and the like), the updated object mark information can be acquired at this time, the newly added mark information is updated according to the updated object mark information, and the data mark information of each part of the language data in the preset file is respectively updated according to the updated newly added mark information, so that the data mark information, the newly added mark information and the object mark information of each part of the language data are matched with each other, and the accuracy of the relevant matching or searching process is ensured. And/or if the target object updates the display content and the object mark information at the same time, acquiring the updated second object display content, and updating each part of language data according to the second object display content; and updating the newly added mark information according to the updated object mark information, and respectively updating the data mark information of each part of language data in the preset file according to the updated newly added mark information.
Optionally, in one example, the process of updating each part of the language data may include: identifying the updated content of the second object display content, acquiring, through the translation end, translation contents in various languages corresponding to the updated content, and fusing these translation contents with the corresponding language data in the preset file, thereby updating each part of the language data. In another example, the process of updating each part of the language data may include: respectively sending the second object display content to translation ends corresponding to various languages, acquiring from the translation ends various updated language data representing the second object display content, and replacing the original language data of each part in the preset file with the updated language data, thereby updating each part of the language data.
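A minimal sketch of the two update strategies just described, assuming the language data can be modelled as a map from display-content identifiers to translated text and that a translate callback stands in for the translation end; all names are illustrative:

```kotlin
typealias LanguageData = MutableMap<String, String>   // display-content id -> translated text

// Strategy 1: translate only the changed entries and merge them into the stored language data.
fun mergeUpdate(stored: LanguageData, changed: Map<String, String>, translate: (String) -> String) {
    for ((id, text) in changed) stored[id] = translate(text)
}

// Strategy 2: re-translate the whole second object display content and replace the old data.
fun replaceUpdate(fullContent: Map<String, String>, translate: (String) -> String): LanguageData =
    fullContent.mapValues { (_, text) -> translate(text) }.toMutableMap()
```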
Optionally, the processing method further includes: s31, a preset file of the target object is set. The example can preset a preset file in the time of an installation process, an initialization process or other object setting processes, and the like, and is used for recording other language data which are not recorded by the initial language configuration file and representing the target object, so that when the target object monitors a display instruction or a switching instruction of a certain language, the newly added language data can be searched in the preset file, the corresponding language data can be directly loaded, and the loading efficiency is improved.
Optionally, after the preset file of the target object is set, the access authority of the preset file can be configured; for example, it may be set that the running process of the target object can access the preset file, or the preset file can be accessed only in some modes (such as an intelligent switching mode) of the target object, and so on.
Optionally, the target object may set an icon for each part of the language data according to the language marks of the various languages and display the corresponding language data on a display interface of the preset file through each icon; if the preset file contains too much language data to fit on one display interface, the preset file may include at least one display interface and display the icons of the corresponding language data across these display interfaces. For example, fig. 6b shows a display interface of the preset file that displays icons of language data corresponding to languages A, B, C, D, E, F, G, H and I, respectively; the display interface shown in fig. 6b corresponds to the middle of the preset file, and the interfaces on its left and right sides display the icons of other language data.
Optionally, the processing method may further include:
responding to a resource addition instruction of a second language, and determining the language beyond the range as a new language when the second language exceeds the resource range; the second language includes at least one language;
adopting each newly added language to translate the target object, and determining or generating newly added language data representing the target object in each newly added language;
and storing the newly added language data into a preset file.
Optionally, the second language is out of resource scope, and may be: the second language is beyond the resource range of the preset database.
The new language may include one language or multiple languages; if it includes multiple languages, the target object needs to be translated using each of them in this example, so as to determine or generate the language data corresponding to each language, and the determined or generated parts of language data are together taken as the new language data.
In this implementation, language data that may be used later is added to the preset file in advance, so that the next time the target object is displayed in any one of these languages, the corresponding language data can be read directly from the preset file for loading and display. On the basis of responding to the language used by the user, this improves subsequent display efficiency, reduces the resources occupied in the subsequent display process, and increases the running speed of the corresponding intelligent terminal during that process.
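A brief sketch of this pre-population step, assuming the preset file can be abstracted as a map keyed by language and that translateAll stands in for translating the whole target object through a translation end; the class and function names are assumptions:

```kotlin
class PresetFile(private val store: MutableMap<String, Map<String, String>> = mutableMapOf()) {
    fun contains(language: String) = language in store
    fun put(language: String, data: Map<String, String>) { store[language] = data }
}

fun addLanguageResources(
    requested: List<String>,                          // the second language(s) in the resource addition instruction
    initialLanguages: Set<String>,                    // languages in the initial language configuration file
    preset: PresetFile,
    translateAll: (String) -> Map<String, String>     // translates the whole target object into one language
) {
    requested
        .filter { it !in initialLanguages && !preset.contains(it) }   // languages beyond the resource range
        .forEach { preset.put(it, translateAll(it)) }                 // store the newly added language data
}
```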
The processing method provided by this embodiment may respond to the display instruction of the first language, when the first language exceeds the resource range of the target object, translate the target object by using the first language, determine or generate display data representing the target object by using the first language, load the display data for the target object, and display the target object by using the first language, so as to respond to a requirement of a user for using the first language, and improve flexibility and corresponding user experience of the target object such as an intelligent terminal and/or an application program in a language switching process.
On the basis of the foregoing embodiments, a second embodiment of the present application provides a language resource loading system, including:
the first determining or generating module may be configured to respond to a display instruction in a first language, translate the target object in the first language when the first language is beyond a resource range of the target object, and determine or generate display data representing the target object in the first language;
and the loading module can be used for loading display data for the target object and displaying the target object by adopting the first language.
For the specific limitation of the loading system of the language resource, reference may be made to the above limitation on the processing method, which is not described herein again. The modules in the loading system of the language resources can be wholly or partially realized by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, a processor in the computer device in a hardware form, or can be stored in a memory of the computer device in a software form, so that the processor can call them and execute the operations corresponding to the modules.
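Purely as an illustrative sketch, the two modules described above could be expressed as plain interfaces; the interface and method names are assumptions rather than the patent's concrete classes:

```kotlin
interface FirstDeterminingOrGeneratingModule {
    // Responds to a display instruction of the first language and returns display data
    // representing the target object in that language, translating when it is beyond the resource range.
    fun onDisplayInstruction(firstLanguage: String): Map<String, String>
}

interface LoadingModule {
    // Loads the display data so that the target object is displayed in the first language.
    fun load(displayData: Map<String, String>)
}
```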
In the third embodiment, referring to fig. 7, the present application provides a processing method including steps S13, S14, and S15:
S13, responding to a resource addition instruction of the first language, and determining the language beyond the range as a newly added language when the first language exceeds the resource range of the target object; the first language includes at least one language.
Optionally, the first language is beyond the resource scope of the target object, including: the first language is beyond the resource scope of the preset database record of the target object.
The target object may include the intelligent terminal and/or various application programs on the intelligent terminal. The first language to be newly added includes at least one language that the user needs in daily life, such as the user's native language and/or working language, and may include Chinese, English, Japanese, Korean, Mongolian, Tibetan, and languages of other countries and/or other nationalities. Resources corresponding to the first language are added for the target object, so that the corresponding language data can be loaded directly when the target object needs to be displayed in any one of the first languages, and each display interface is displayed in the corresponding language.
Target objects such as the intelligent terminal and the application programs installed on it are provided with corresponding preset databases for recording language data representing the target object in various languages. The preset database may record an initial language configuration file of the target object and may also record a file (e.g., the preset file) storing other language data of the target object. The initial language configuration file records each piece of original language data configured for the target object, each of which represents the target object in a language originally configured for the target object. In step S13, each part of the language data in the preset database is traversed; if language data corresponding to every language in the first language cannot be found, it is determined that the first language exceeds the resource range of the target object, and the languages beyond the range (the languages currently not supported by the target object) are determined as newly added languages, the newly added languages including at least one language; the target object is then translated using each newly added language to determine or generate the newly added language data representing the target object in each newly added language, thereby adding the language resources. If the preset database already includes language data corresponding to every language in the first language, it is judged that the first language does not exceed the resource range recorded by the preset database, and the resource adding procedure can be exited, reducing resource occupation and increasing the running speed of the target object.
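A minimal sketch of this traversal, assuming the preset database simply exposes the sets of languages recorded in the initial language configuration file and in the preset file; the types and names are illustrative:

```kotlin
class PresetDatabase(
    private val initialConfigLanguages: Set<String>,   // initial language configuration file
    private val presetFileLanguages: Set<String>       // preset file with previously added languages
) {
    fun hasLanguage(language: String) =
        language in initialConfigLanguages || language in presetFileLanguages
}

// Keep only the requested languages with no recorded data; an empty result means every
// requested language is already supported, so the resource-adding flow can exit early.
fun determineAddedLanguages(firstLanguage: List<String>, db: PresetDatabase): List<String> =
    firstLanguage.filterNot { db.hasLanguage(it) }
```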
Optionally, after the target object monitors newly added trigger information of the first language, the target object may directly determine or generate a resource addition instruction; alternatively, the language response mode of the target object may first be identified, and the resource addition instruction of the first language determined or generated according to the current language response mode. In one implementation, the obtaining of the resource addition instruction includes:
responding to newly added trigger information;
if the target object has enabled the intelligent newly added language function, determining or generating the resource addition instruction; and/or,
if the target object has not enabled the intelligent newly added language function, outputting prompt information for enabling the intelligent newly added language function, and determining or generating the resource addition instruction after responding to an enabling instruction corresponding to the prompt information.
Optionally, the language response mode of the target object may include an intelligent adding mode in which the intelligent newly added language function is enabled and a non-intelligent adding mode in which it is not. In the intelligent adding mode, the target object directly determines or generates the resource addition instruction of the first language after monitoring the adding trigger information of the target language, improving the efficiency of adding language resources to the target object. In the non-intelligent adding mode, after monitoring the adding trigger information, the target object may output, through pop-up display and/or voice playing, prompt information asking whether to switch to the intelligent adding mode (i.e., whether to enable the intelligent newly added language function). If the user does not need to switch to the intelligent adding mode, the user may input confirmation information declining the switch, so that the target object exits the current adding operation of the first language and subsequently uses the original language data in the initial language configuration file for display; and/or, if the user needs to switch to the intelligent adding mode, the user inputs confirmation information for switching, and the target object switches to the intelligent adding mode, enables the intelligent newly added language function, and determines or generates the resource addition instruction of the first language. This ensures that the determined or generated resource addition instruction is authorized by the user, further improving the user experience.
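A small sketch of this gating logic, with promptUser standing in for the pop-up or voice prompt; the parameter names and the boolean return convention are assumptions:

```kotlin
// Returns true when the resource addition instruction should be determined or generated.
fun onAddTrigger(
    intelligentAddEnabled: Boolean,
    promptUser: () -> Boolean          // pop-up / voice prompt; true if the user confirms enabling
): Boolean = when {
    intelligentAddEnabled -> true      // intelligent adding mode: generate the instruction directly
    promptUser() -> true               // user agreed to switch to the intelligent adding mode
    else -> false                      // user declined: exit and keep the original language data
}
```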
Optionally, the newly added trigger information of the first language may include trigger information generated by the target object in response to any adding operation input by the user, and/or newly added trigger information determined or generated by the target object and at least one application program of the intelligent terminal where the target object is located, and the like. Optionally, the target object's monitoring of the newly added trigger information of the first language includes at least one of the following:
and detecting the operation of adding the first language by the language adding interface of the target object. For example, the user clicks a trigger control corresponding to the target language in the language newly added interface of the target object or inputs the name of the target language into a corresponding language search box, so that the target object monitors the newly added trigger information of the first language. Here, the target object of the mobile phone is taken as an example for detailed description, the language adding interface of the mobile phone can refer to fig. 8a, and the user can select at least one newly added target language on the language adding interface and/or input the newly added target language, so that the mobile phone monitors the operation corresponding to the newly added selected target language. After the user selects at least one newly added target language on the language newly added interface, the language newly added interface can obviously confirm the button, so that the user inputs a confirmation instruction through the confirmation button after finishing selecting the target language required to be newly added. Fig. 8b is a schematic diagram showing that a user selects a Tibetan language in the common language list of the language adding interface, searches through other language search boxes, and selects a Mongolian language and an Arabic language, and if the user only needs to add resources of the 3 languages, the user can click a confirmation button below the language adding interface as shown in fig. 8c, so that the mobile phone monitors the operation of adding the 3 languages; if the user needs to add new resources of other languages, the user can input the other languages needing to be added in the common language or through the search boxes of the other languages, and click the confirmation button below the language adding interface after selecting all the languages needing to be added, so that the mobile phone monitors the operation of adding each selected language.
Detecting confirmation information corresponding to a language adding prompt; the language adding prompt is prompt information output when the positioning program on the terminal where the target object is located outputs a new area for the first time. For example, when the positioning program on the intelligent terminal is first positioned in area A corresponding to language A, target objects such as the intelligent terminal and/or the application programs installed on it output a language adding prompt corresponding to language A, asking the user whether to add the resources of language A; if the user inputs an instruction confirming the addition of the language A resources, the target object can monitor the confirmation information corresponding to the language adding prompt.
Detecting, when the target object is started and its current language is inconsistent with the startup environment language, newly added trigger information corresponding to the environment language. For example, when an application program of the intelligent terminal is started, the language used by the intelligent terminal may be identified and taken as the environment language; if the startup environment language is inconsistent with the language currently used by the application program, resources corresponding to the startup environment language need to be added, and the newly added trigger information corresponding to the environment language may be determined or generated.
Detecting newly added trigger information corresponding to the various languages supported by the intelligent terminal. Optionally, when an application program of the intelligent terminal is started, the various languages supported by the intelligent terminal may be identified and taken as languages that need to be added, and the newly added trigger information corresponding to these languages is determined or generated, so that the application program adds the resources of these languages and stays synchronized with the language resources of the intelligent terminal on which it runs.
In this example, the newly added trigger information of the first language can be determined according to an adding operation input by the user on the target object and/or the language environment of the target object, which improves flexibility in the process of determining or generating the newly added trigger information.
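Purely as an illustration, the four trigger sources listed above could be modelled as a small sealed hierarchy; the type names are assumptions:

```kotlin
sealed class AddLanguageTrigger {
    data class UserSelection(val languages: List<String>) : AddLanguageTrigger()           // language adding interface
    data class NewRegion(val regionLanguage: String) : AddLanguageTrigger()                // positioning program enters a new area
    data class EnvironmentMismatch(val environmentLanguage: String) : AddLanguageTrigger() // startup language differs from the current one
    data class DeviceLanguages(val supported: List<String>) : AddLanguageTrigger()         // sync with the terminal's supported languages
}
```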
S14, translating the target object using each newly added language to determine or generate the newly added language data representing the target object in each newly added language.
The new language may include one language or multiple languages, and if the new language includes multiple languages, the step S14 needs to translate the target object by using each new language respectively to determine or generate corresponding language data, and determine each determined or generated part of the language data as the new language data to ensure the integrity of the new language data. Optionally, each language data such as the newly added language data is represented by at least one file, in this case, one language corresponds to one set of language subfiles, and the set of language subfiles may include one unit file or may include a plurality of unit files (e.g., text unit files and audio unit files). Alternatively, the linguistic data may be characterized by other forms of data.
Step S14 may be implemented by translating the target object using a translation end to obtain the required newly added language data. The translation end may be translation software (such as an online translation application program and/or an offline translation application program) pre-installed on the intelligent terminal where the target object is located, or may be a translation end in communication connection with the target object. Optionally, at least one translation end may be used to translate the target object and the translation contents provided by each translation end obtained, so as to determine the translation content finally adopted, determine or generate the newly added language data, and ensure the accuracy of the obtained newly added language data.
Optionally, the target object uses the first object display content to record the content currently displayed on each display interface; the step S14 includes:
sending the first object display content to at least one translation end associated with the first language;
and selecting the content with the highest accuracy as the target translation content from the initial translation content provided by each translation end, and determining or generating new language data according to the target translation content.
Optionally, the first object display content may be characterized by a general language with a high frequency of use, or may be characterized by characters that can be recognized by various translation terminals, such as hash characters, so that after receiving the first object display content, the translation terminal can accurately recognize the first object display content, and the accuracy of the obtained target translation content is ensured. Language data such as newly added language data obtained by translating the display content of the first object can be recorded in a format capable of being recognized by the target object (for example, the language data can be recorded as a proto file), so that the target object directly loads the corresponding language data in the subsequent process, and the corresponding language data is accurately read and used.
Optionally, the first object display content includes text information and/or audio information that each display interface of the target object needs to display; the display interface display system can comprise at least one unit content, and each unit content can carry information such as corresponding marks of each display interface, display positions and/or display opportunities related to the corresponding display interface and the like, so that each unit content can be displayed by adopting a corresponding language when the target object displays each display interface; the content of each unit may also carry display parameters, such as font and font size of the display text, and/or tone, volume, etc. of the playing audio, so as to ensure the content display effect of each display interface.
Optionally, the object recognition function of the translation end and the format of the determined or generated translation content may match the format of the first object display content. If the first object display content includes text information, the translation end has a character recognition function and can accurately recognize the first object display content represented in character form and translate it into corresponding characters; and/or, if the first object display content includes audio information, the translation end has a voice recognition function and can accurately recognize the first object display content represented in audio form and translate it into corresponding audio; and/or, if the first object display content includes both text information and audio information, the translation end has both a character recognition function and a voice recognition function, can accurately recognize the first object display content represented in each form, translates the text information into corresponding characters, and translates the audio information into corresponding audio. Optionally, if the translation end can only recognize text information, the translation content is determined or generated in character form. In that case, when the first object display content includes audio information, the target object may first send the first object display content to a content conversion end, which can recognize the text information and audio information of the first object display content and convert between them. The content conversion end recognizes the audio information of the first object display content, records audio attribute information such as the time and/or position of the audio information, converts the audio information into corresponding text information, combines the converted text information with the original text information of the first object display content into new object display content, and sends the new object display content to the translation end. After the translation end obtains the translation content, the translation content is sent back to the content conversion end, which extracts the corresponding audio translation text according to the audio attribute information and converts it into audio to obtain the translated audio; the corresponding language data is then determined or generated according to the translated audio and the remaining translation content (the translated text corresponding to the original text information), so as to ensure the accuracy and integrity of the obtained language data.
Alternatively, this implementation may select the most accurate content from the initial translation contents based on, for example, the number of sentences and/or the ratio of associated symbols. Optionally, a translation end with high reliability may be selected as an evaluation end, and each initial translation content is evaluated by the evaluation end to obtain the content with the highest accuracy, further ensuring the accuracy of the determined target translation content.
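As one possible illustration of this comparison, the sketch below scores each candidate by how closely its sentence count and symbol ratio match the source content and keeps the closest one; the scoring heuristic itself is an assumption, not something specified by the patent:

```kotlin
fun selectTargetTranslation(source: String, candidates: List<String>): String? {
    fun sentences(s: String) = s.split(Regex("[.!?。！？]")).count { it.isNotBlank() }
    fun symbolRatio(s: String) =
        s.count { !it.isLetterOrDigit() && !it.isWhitespace() }.toDouble() / maxOf(s.length, 1)

    val srcSentences = sentences(source)
    val srcSymbols = symbolRatio(source)
    // The candidate whose structure deviates least from the source is treated as the most accurate.
    return candidates.minByOrNull { c ->
        kotlin.math.abs(sentences(c) - srcSentences) + kotlin.math.abs(symbolRatio(c) - srcSymbols)
    }
}
```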
S15, storing the newly added language data into a preset file; the preset file may be stored in a preset database.
In step S15, the newly added language data is stored in the preset file of the target object, so that the next time the target object is displayed in any one of the newly added languages, the corresponding language data can be read directly from the preset file for loading and display. On the basis of responding to the language used by the user, this improves subsequent display efficiency, reduces the resources occupied in the subsequent display process, and increases the running speed of the corresponding intelligent terminal during that process.
Optionally, the preset file is named by the newly added mark information; the step of S15 includes:
setting data mark information of the newly added language data according to the newly added mark information and the language mark information corresponding to the newly added language data;
and storing the newly added language data carrying the data mark information into a preset file.
The data mark information corresponds one to one to each piece of language data (or each part of language data) in the newly added language data and is a unique mark of that language data. For example, if the object mark information of the target object includes the name and version number of the target object, the name and version number may be set as the newly added mark information of the corresponding preset file, so that the newly added mark information matches the object mark information of the target object and facilitates subsequent searching. The data mark information of each piece of language data in the newly added language data includes the newly added mark information corresponding to the preset file and the language mark information representing the corresponding language; for example, if the newly added mark information of the preset file of a certain target object is key and the language mark information of a certain language is value, the data mark information of the corresponding language data may be key-value. In this way, the data mark information of each piece of language data in the newly added language data corresponds to the corresponding newly added mark information, so that the language data and the preset file can be quickly matched according to the data mark information and the newly added mark information.
Optionally, the processing method further includes:
s16, responding to the display instruction of the second language, and searching the display data of the target object represented by the second language;
s17, loading display data for the target object to display the target object in the second language.
Optionally, the searching for display data representing the target object in the second language includes: and searching display data representing the target object by adopting the second language in a preset database.
The second language may be one of the language resources recorded in the preset database, such as a language recorded in the initial language configuration file or a language recorded in the preset file. The display instruction may include a display instruction generated by the target object in response to any display operation input by the user and/or a display instruction determined or generated by at least one application program of the corresponding intelligent terminal, and the like. For example, the target object may monitor a second language input by the user on the language display interface, or monitor a language switching instruction corresponding to a second language matched to a certain position, where the position may be a position output by the positioning program of the intelligent terminal where the target object is located.
Optionally, if the second language is beyond the resource range (optionally, beyond the resource range of the preset database), that is, the resources of the second language are recorded in neither the initial language configuration file nor the preset file, the second language may be taken as a newly added language: the resource addition instruction of the second language is determined or generated, steps S13 and S14 are executed to obtain the display data, and the display instruction of the second language is thereby responded to.
In this implementation, the display data is loaded for the target object so that the target object displays each of its display interfaces in the second language, responding to the user's requirement to display in the second language; this can improve the flexibility of the target object in the language switching process, improve the corresponding user experience, and make the target object easier to accept for various groups in various regions. Optionally, the target object may perform a corresponding process through at least one display interface; for example, the intelligent terminal displays a power-on process through a power-on interface, a power-off process through a power-off interface, a communication process through each chat interface and/or address book interface, and a shopping process through a home page display interface, each brand goods display interface, and/or each goods purchasing interface. The target object is displayed according to the loaded display data, so that when displaying each display interface it can display characters corresponding to the second language in the display area, pop up prompt information expressed in the second language, and/or play voice corresponding to the second language, ensuring the display effect corresponding to the second language in all respects.
Optionally, in step S16, the preset database includes an initial language configuration file and a preset file. The target object may first traverse the preset file to search for the display data, and if the display data is not found, traverse the initial language configuration file to search for the language data of each part in the preset database.
Optionally, the access right of the preset file may match the intelligent newly added language function of the target object; that is, in the intelligent adding mode the target object has the intelligent newly added language function and the right to access the preset file, while in the non-intelligent adding mode it has neither. In this case, if the target object is in the intelligent adding mode, after a display instruction of the second language is monitored, the display data is searched for in the preset file and the initial language configuration file respectively; if the display data is not found, the second language may be taken as a newly added language and steps S13 and S14 executed to determine or generate the display data. And/or, if the target object is in the non-intelligent adding mode, the display data is searched for only in the initial language configuration file after the display instruction of the second language is monitored, and if the display data is not found, default language data is loaded.
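A compact sketch of this lookup order, with addAsNewLanguage standing in for the S13/S14 flow and plain maps standing in for the preset file and the initial language configuration file; all names are illustrative:

```kotlin
fun findDisplayData(
    secondLanguage: String,
    presetFile: Map<String, Map<String, String>>,         // language -> language data
    initialConfig: Map<String, Map<String, String>>,
    intelligentAddEnabled: Boolean,
    addAsNewLanguage: (String) -> Map<String, String>,    // runs the S13/S14 flow (illustrative)
    defaultData: Map<String, String>
): Map<String, String> {
    if (intelligentAddEnabled) {
        presetFile[secondLanguage]?.let { return it }     // traverse the preset file first
        initialConfig[secondLanguage]?.let { return it }  // then the initial language configuration file
        return addAsNewLanguage(secondLanguage)           // not found: treat the second language as newly added
    }
    return initialConfig[secondLanguage] ?: defaultData   // non-intelligent mode: initial file only, else default
}
```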
Optionally, each display interface of the target object is displayed through text information and/or audio information; the display data comprises text translation data corresponding to the text information and/or audio translation data corresponding to the audio information;
the step of S17 includes: and displaying each display interface of the target object, loading corresponding text translation data for a preset display area of each display interface so as to display text information of the preset display area by adopting a second language, and/or loading corresponding audio translation data at the voice display time of each display interface so as to play corresponding audio information at the voice display time by adopting the second language.
The first object display content may include at least one unit content, and each unit content may carry information such as the mark information of the corresponding display interface and the display position and/or display timing within that display interface. Correspondingly, the text translation data carries the mark information of the corresponding display interface and the display position within that interface, and the audio translation data carries the mark information of the corresponding display interface and the display timing relative to that interface. The target object can therefore display the corresponding text translation data in each text display area when displaying each display interface (for example, the display interface shown in fig. 8a appears as shown in fig. 8d after switching to English (British)) and play the corresponding audio information at each voice display timing, ensuring the display effect of the target object.
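A brief sketch of how such per-interface text and audio translation data might be loaded when a display interface is shown; the data classes and the render/play callbacks are illustrative assumptions:

```kotlin
data class TextTranslation(val interfaceId: String, val position: String, val text: String)
data class AudioTranslation(val interfaceId: String, val timingMs: Long, val audioId: String)

fun showInterface(
    interfaceId: String,
    texts: List<TextTranslation>,
    audios: List<AudioTranslation>,
    renderText: (position: String, text: String) -> Unit,      // display text in the preset display area
    scheduleAudio: (timingMs: Long, audioId: String) -> Unit    // play audio at the voice display timing
) {
    texts.filter { it.interfaceId == interfaceId }.forEach { renderText(it.position, it.text) }
    audios.filter { it.interfaceId == interfaceId }.forEach { scheduleAudio(it.timingMs, it.audioId) }
}
```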
Optionally, before the step of S13, the processing method further includes:
s11, setting a preset file of the target object; the preset file is used for storing language data which represents the target object and exceeds the recording range of the initial language configuration file of the target object.
In this implementation, the preset file can be set in advance during installation, initialization, or another object setting process, to record other language data, not recorded by the initial language configuration file, that represents the target object. In this way, when the target object monitors a display instruction or a switching instruction of a certain language, the newly added language data can be searched for in the preset file, and the required language data can be loaded directly once found, which improves loading efficiency.
Optionally, after the preset file of the target object is set, the access authority of the preset file can be configured; for example, it may be set that the running process of the target object can access the preset file, or the preset file can be accessed only in some modes (such as an intelligent addition mode) of the target object, and so on.
Optionally, the target object may set an icon for each part of the language data according to the language marks of the various languages and display the corresponding language data on a display interface of the preset file through each icon; if the preset file contains too much language data to fit on one display interface, the preset file may include at least one display interface and display the icons of the corresponding language data across these display interfaces. For example, fig. 6b shows a display interface of the preset file that displays icons of language data corresponding to languages A, B, C, D, E, F, G, H and I, respectively; the display interface shown in fig. 6b corresponds to the middle of the preset file, and the interfaces on its left and right sides display the icons of other language data.
Optionally, the processing method further includes:
when the target object is updated, acquiring updated second object display content, and updating language data of each part of the preset file according to the second object display content;
and/or,
when the object mark information of the target object is updated, updating the newly added mark information of the preset file according to the updated object mark information, and respectively updating the data mark information of each part of language data in the preset file according to the updated newly added mark information.
In this implementation, update behaviors of the target object, such as a version upgrade, can be monitored to obtain changes in the object display content and/or the object mark information during the update, and each part of the language data and/or the newly added mark information of the preset file can be updated according to these changes, so that each part of the language data, each piece of data mark information, and/or the newly added mark information of the preset file remains matched with the updated target object.
Optionally, if the target object only updates the display content, the updated display content of the second object may be obtained, and the language data of each part of the preset file is updated according to the display content of the second object, so as to ensure the accuracy of the content represented by the language data of each part. And/or, if the target object only updates the object mark information (such as version number and the like), the updated object mark information can be acquired at this time, the newly added mark information is updated according to the updated object mark information, and the data mark information of each part of the language data in the preset file is respectively updated according to the updated newly added mark information, so that the data mark information, the newly added mark information and the object mark information of each part of the language data are matched with each other, and the accuracy of the relevant matching or searching process is ensured. And/or if the target object updates the display content and the object mark information at the same time, acquiring the updated second object display content, and updating each part of language data according to the second object display content; and updating the newly added mark information according to the updated object mark information, and respectively updating the data mark information of each part of language data in the preset file according to the updated newly added mark information.
Optionally, the process of updating the partial language data may include: and identifying the updating content of the display content of the second object, acquiring various language translation contents corresponding to the updating content, and fusing the various language translation contents and the corresponding language data in the preset file to realize the updating of the language data of each part. In another example, the above process of updating the partial language data may also include: and translating the whole second object display content by adopting various languages respectively to obtain various updated language data representing the second object display content, and replacing the original language data of each part in the preset file by adopting the updated language data so as to update the language data of each part.
In the processing method provided by this embodiment, in response to the resource addition instruction of the first language, when the first language exceeds the resource range of the target object, the language beyond the range is determined as the newly added language, the target object is translated using each newly added language, the newly added language data representing the target object in each newly added language is determined or generated, and the newly added language data is stored in the preset file, so that the corresponding language data can subsequently be loaded directly when the target object is displayed in any of these languages.
On the basis of the above embodiments, a fourth embodiment of the present application provides a processing system, including:
the determining module may be configured to respond to a resource addition instruction in a first language, and determine, when the first language exceeds a resource range of a target object, a language beyond the range as an addition language;
the second determining or generating module may be configured to translate the target object using each newly added language and determine or generate the newly added language data representing the target object in each newly added language;
and the storage module can be used for storing the newly-added language data to a preset file.
Optionally, the first language is beyond the resource scope of the target object, including: the first language is beyond the resource scope of the preset database record of the target object.
For the specific limitations of the processing system, reference may be made to the limitations of the processing method above, which are not described herein again. The various modules in the processing system may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The application also provides an intelligent terminal, which comprises a memory and a processor, wherein the memory is stored with a processing program, and the processing program is executed by the processor to realize the steps of the processing method in any embodiment.
The present application also provides a computer-readable storage medium, on which a processing program is stored, which, when executed by a processor, implements the steps of the processing method in any of the above embodiments.
In the embodiments of the intelligent terminal and the computer-readable storage medium provided in the present application, all technical features of any one of the embodiments of the processing method may be included, and the expanding and explaining contents of the specification are basically the same as those of the embodiments of the method, and are not described herein again.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible implementation manners.
The embodiments of the present application also provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible implementation manners.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions will be generally described only in detail at the first occurrence, and when the description is repeated later, the detailed description will not be repeated in general for brevity, and when understanding the technical solutions and the like of the present application, reference may be made to the related detailed description before the description for the same or similar term concepts, technical solutions and/or application scenario descriptions and the like which are not described in detail later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solution of the present application may be arbitrarily combined, and for brevity of description, all possible combinations of the technical features in the embodiments are not described, however, as long as there is no contradiction between the combinations of the technical features, the scope of the present application should be considered as being described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, memory Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (12)

1. A method of processing, comprising:
s36, responding to a display instruction of a first language, when the first language exceeds the resource range of a target object, translating the target object by adopting the first language, and determining or generating display data for representing the target object by adopting the first language;
s37, loading the display data for the target object, and displaying the target object by adopting the first language.
2. The method according to claim 1, wherein the obtaining of the display instruction comprises:
responding to a switching instruction of a first language;
if the target object starts an intelligent language switching function, determining or generating a display instruction of the first language; and/or,
if the target object does not start the intelligent language switching function, outputting prompt information for starting the intelligent language switching function, and determining or generating a display instruction of the first language after responding to a starting instruction corresponding to the prompt information.
3. The method according to claim 1, wherein in the step S36, translating the target object in the first language, and determining or generating the display data representing the target object in the first language comprises:
sending a first object display content to at least one translation end associated with the first language;
and determining target translation content from the initial translation content provided by the at least one translation terminal, and determining or generating the display data according to the target translation content.
4. The method according to any one of claims 1 to 3, wherein a display interface of the target object is displayed by preset information;
the step of S37 includes: displaying a display interface of the target object, loading corresponding translation data for a preset display area of the display interface, and displaying preset information of the preset display area by adopting the first language.
5. The method according to any one of claims 1 to 3, wherein after the step of S36, the method further comprises:
and S35, storing the display data into a preset file.
6. The method according to claim 5, wherein the preset file is named by newly added mark information, and the step S35 includes:
setting data mark information of the display data according to the newly added mark information and language mark information corresponding to the display data;
and storing the display data carrying the data mark information to the preset file.
7. A method of processing, comprising:
s13, responding to a new instruction of a first language, and determining the language beyond the range as the new language when the first language exceeds the resource range of the target object;
s14, translating the target object by adopting the new language, and determining or generating new language data representing the target object by the new language;
and S15, saving the newly added language data to a preset file.
8. The method of claim 7, wherein the obtaining of the new instruction comprises:
responding to newly added trigger information;
if the target object starts an intelligent newly-added language function, determining or generating the new instruction; and/or,
and if the target object does not start the intelligent newly-added language function, outputting prompt information for starting the intelligent newly-added language function, and determining or generating the new instruction after responding to a starting instruction corresponding to the prompt information.
9. The method according to claim 7, wherein the preset file is named by newly added mark information, and the step S15 includes:
setting data mark information of the newly added language data according to the newly added mark information and language mark information corresponding to the newly added language data;
and storing the newly added language data carrying the data mark information to the preset file.
10. The method according to any one of claims 7 to 9, further comprising:
s16, responding to a display instruction of a second language, and searching display data representing the target object by the second language;
s17, loading the display data for the target object to display the target object in the second language.
11. An intelligent terminal, characterized in that, intelligent terminal includes: memory, processor, wherein the memory has stored thereon a processing program which, when executed by the processor, implements the steps of the method according to any one of claims 1 to 10.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
CN202110915449.0A 2021-08-10 2021-08-10 Processing method, intelligent terminal and storage medium Pending CN113608813A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination