WO2023010705A1 - Data processing method, mobile terminal and storage medium - Google Patents

Data processing method, mobile terminal and storage medium

Info

Publication number
WO2023010705A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
information
resource
user
mobile terminal
Prior art date
Application number
PCT/CN2021/129675
Other languages
English (en)
French (fr)
Inventor
崔娜娜
朱文治
汪自蒋
Original Assignee
上海传英信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 上海传英信息技术有限公司
Publication of WO2023010705A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 - File systems; File servers
    • G06F 16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F 16/168 - Details of user interfaces specifically adapted to file systems, e.g. browsing and visualisation, 2d or 3d GUIs

Definitions

  • the present application relates to the technical field of data processing, and in particular to a data processing method, a mobile terminal and a storage medium.
  • applications in mobile terminals can generate photo collections based on information such as time, location, or people.
  • the photo collection generated by the above technical solution is relatively fixed. It has no emotional color or story line, and/or cannot be combined with the current scene of the user, and lacks emotional interaction with the user, which in turn affects the user experience.
  • the present application provides a data processing method, a mobile terminal and a readable storage medium, which determine, in combination with the scene the user is in, information that can meet the demand parameters, and then generate or determine the target resource to be displayed to the user, so that the displayed content has a story line and can be combined with the user's scene and demand parameters.
  • the present application provides a data processing method applied to a mobile terminal, including: acquiring at least one piece of application information of the mobile terminal, and determining at least one target scene according to the application information;
  • determining target information according to the target scene, and generating or determining a target resource to display according to the target information.
  • when one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying the target resource corresponding to each of the target scenes;
  • or, when at least one piece of application information corresponds to the same target scene, displaying the target resource includes displaying the target resource corresponding to that target scene;
  • or, when at least one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying at least one target resource.
  • the target resource includes a folder, and displaying the target resource includes displaying one folder, and/or displaying at least one parallel folder, and/or displaying a parent folder and subfolders.
  • the target information is modifiable information, and when it is detected that the target information is changed, the displayed target resource is changed according to the changed target information.
  • when a change to the application information is detected, the displayed target resource is changed according to the changed application information, or is left unchanged.
  • when a change to the displayed target resource is detected, the changed target resource is displayed in a subfolder of the target resource, or in a parallel folder.
  • the step of determining target information according to the target scene includes: estimating demand parameters according to the target scene, and determining target information that matches the demand parameters; optionally, the target information includes at least one of travel information, exercise and health information, social information, and weather information.
  • the step of generating or determining a target resource to display according to the target information includes: converting the target information into a resource query condition; and retrieving resource information in the mobile terminal according to the resource query condition, and generating or determining the target resource to display from the retrieved resource information.
  • after the step of generating or determining a target resource to display according to the target information, the method includes: acquiring feedback information, and determining a preset level of the target resource according to the acquired feedback information; and when it is detected that the preset level of the target resource is lower than or equal to a preset threshold, adjusting the target information so as to adjust the target resource.
  • the present application also provides a mobile terminal, including: a memory and a processor, wherein a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the above data processing method are implemented.
  • the present application also provides a computer storage medium, where the computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the above-mentioned data processing method are realized.
  • this application discloses a data processing method, a mobile terminal, and a readable storage medium.
  • the data processing method of this application is applied to a mobile terminal.
  • This application acquires at least one piece of application information of the mobile terminal, and determines at least one target scene according to the application information; target information is determined according to the target scene, and a target resource to display is generated or determined according to the target information.
  • the target resource is generated and displayed in combination with the target scene where the user is located, and the emotional interaction between the displayed content and the user is increased, thereby improving the user experience.
  • FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application
  • FIG. 2 is a system architecture diagram of a communication network provided by an embodiment of the present application.
  • Fig. 3 is a schematic flowchart of a data processing method according to the first embodiment.
  • first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. Without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information.
  • the singular forms "a”, “an” and “the” are intended to include the plural forms as well, unless the context indicates otherwise.
  • "including at least one of the following: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; as another example, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition arises only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • the words "if" as used herein may be interpreted as "upon", "when", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
  • step designations such as S11 and S12 are used herein to express the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order; those skilled in the art may, in specific implementations, execute S12 first and then S11, etc., but such variations remain within the scope of protection of this application.
  • Mobile terminals may be implemented in various forms.
  • the mobile terminals described in this application may include mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, pedometers and other mobile terminals, as well as fixed terminals such as digital TVs and desktop computers.
  • a mobile terminal will be taken as an example, and those skilled in the art will understand that, in addition to elements specially used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application.
  • the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • the radio frequency unit 101 can be used for receiving and transmitting signals while sending and receiving information or during a call; specifically, it delivers downlink information received from the base station to the processor 110 for processing, and sends uplink data to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
  • the above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication, Global System for Mobile Communications), GPRS (General Packet Radio Service, general packet radio service), CDMA2000 (Code Division Multiple Access 2000, Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access, Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access, Time Division Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing- Long Term Evolution, frequency division duplex long-term evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution, Time Division Duplexing Long Term Evolution) and so on.
  • WiFi is a short-distance wireless transmission technology.
  • the mobile terminal can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 102, which provides users with wireless broadband Internet access.
  • although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
  • the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output unit 103 can also provide audio output (call signal reception sound, message reception sound, etc.) related to a specific function performed by the mobile terminal 100 .
  • the audio output unit 103 may include a speaker, a buzzer, and the like.
  • the A/V input unit 104 is used to receive audio or video signals.
  • the A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode, and the processed image frames may be displayed on the display unit 106.
  • the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage media) or sent via the radio frequency unit 101 or the WiFi module 102 .
  • the microphone 1042 may receive sound (audio data) via the microphone 1042 in a phone call mode, a recording mode, a voice recognition mode, and the like operating modes, and can process such sound as audio data.
  • the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output in the case of a phone call mode.
  • the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (generally along three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as pedometer and tapping); as for other sensors that may also be configured on the mobile phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, they will not be described in detail here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061 , and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
  • the user input unit 107 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the mobile terminal.
  • the user input unit 107 may include a touch panel 1071 and other input devices 1072 .
  • the touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1071 may include two parts, a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into contact coordinates , and then sent to the processor 110, and can receive the command sent by the processor 110 and execute it.
  • the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072 .
  • other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like, which are not specifically limited here.
  • the touch panel 1071 may cover the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
  • the touch panel 1071 and the display panel 1061 are used as two independent components to realize the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 can be integrated.
  • the implementation of the input and output functions of the mobile terminal is not specifically limited here.
  • the interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100 .
  • an external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit 108 can be used to receive input from an external device (for example, data information, power, etc.) and transmit the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal 100 and external devices.
  • the memory 109 can be used to store software programs as well as various data.
  • the memory 109 can mainly include a program storage area and a data storage area.
  • the program storage area can store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like, while the data storage area can store data created according to the use of the mobile phone (such as audio data and a phone book), and the like.
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage devices.
  • the processor 110 is the control center of the mobile terminal, and uses various interfaces and lines to connect various parts of the entire mobile terminal, by running or executing software programs and/or modules stored in the memory 109, and calling data stored in the memory 109 , execute various functions of the mobile terminal and process data, so as to monitor the mobile terminal as a whole.
  • the processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor.
  • the application processor mainly processes operating systems, user interfaces, and application programs, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may also not be integrated into the processor 110.
  • the mobile terminal 100 may also include a power source 111 (such as a battery) for supplying power to various components.
  • the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
  • the mobile terminal 100 may also include a Bluetooth module, etc., which will not be repeated here.
  • the following describes the communication network system on which the mobile terminal of the present application is based.
  • FIG. 2 is a structure diagram of a communication network system provided by an embodiment of the present application.
  • the communication network system is an LTE system of universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and the operator's IP service 204.
  • the UE 201 may be the above-mentioned terminal 100, which will not be repeated here.
  • E-UTRAN 202 includes eNodeB 2021 and other eNodeB 2022 and so on.
  • the eNodeB 2021 can be connected to other eNodeB 2022 through a backhaul (for example, X2 interface), the eNodeB 2021 is connected to the EPC 203 , and the eNodeB 2021 can provide access from the UE 201 to the EPC 203 .
  • EPC203 may include MME (Mobility Management Entity, mobility management entity) 2031, HSS (Home Subscriber Server, home user server) 2032, other MME2033, SGW (Serving Gate Way, serving gateway) 2034, PGW (PDN Gate Way, packet data network gateway) 2035 and PCRF ( Policy and Charging Rules Function, policy and tariff function entity) 2036, etc.
  • MME2031 is a control node that processes signaling between UE201 and EPC203, and provides bearer and connection management.
  • HSS2032 is used to provide some registers to manage functions such as the home location register (not shown in the figure), and save some user-specific information about service characteristics and data rates.
  • the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown in the figure).
  • the IP service 204 may include Internet, Intranet, IMS (IP Multimedia Subsystem, IP Multimedia Subsystem) or other IP services.
  • although the LTE system is used as an example above, those skilled in the art should know that this application is not only applicable to the LTE system, but also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
  • the first embodiment provides a data processing method, which includes the following steps (S11-S13):
  • Step S11 acquiring at least one application program information of the mobile terminal, and determining at least one target scene according to the application program information;
  • the data processing method described in this application is applied to a mobile terminal, and the mobile terminal includes a smart phone, a smart watch or bracelet, a tablet computer, etc., and the application to a smart phone (referred to as a mobile phone) is used as an example for illustration below.
  • the application information includes order information in the user's shopping applications, such as train or air ticket booking information and hotel reservation information, and also includes the user's current location information, contact and call information, schedule and to-do information in memos, usage information of various applications, phone usage duration information, network connection status, IoT data, data in terminals such as bracelets or tablets connected to the mobile phone, viewing history of entertainment applications such as music and video, and the like.
  • the target scene where the user is located includes a current scene where the user is located and a scene where the user may be located in the future.
  • Step S12 determining target information according to the target scene, and generating or determining a display target resource according to the target information.
  • target information that can meet the requirement parameters is determined by evaluating the scene where the user is in, and the target resource to be displayed is generated or determined according to the target information.
  • the target resource may be a photo album, or a slideshow or video generated from a photo album or pictures in the photo album.
  • displaying the target resource includes separately displaying target resources corresponding to each target scene;
  • displaying the target resource includes displaying at least one target resource corresponding to the target scene;
  • displaying the target resource includes respectively displaying at least one target resource corresponding to each target scene.
  • the target resource can be displayed in the form of a folder.
  • displaying the target resource includes displaying one folder, optionally displaying at least one parallel folder, and optionally displaying a parent folder and subfolders.
  • taking photo collections as an example, the generated photo collections can be displayed in the form of folders: the photo collections generated according to the same target scene are treated as one folder, at least one photo collection of different time periods or different types generated under the same target scene can be displayed in the same folder, and photo collections generated according to different scenes are displayed in different folders.
  • the way of classifying the folders of the album and the way of dividing parent folders and subfolders is not limited thereto, and no specific limitation is made here.
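  • As an illustration of the folder layout just described, the following minimal Python sketch groups generated photo collections into a parent folder per target scene, with subfolders per time period; the field names (scene, period, title) and the example data are assumptions for illustration only, not part of the claimed method:

```python
from collections import defaultdict

def organize_collections(collections):
    """Group photo collections into parent folders keyed by target scene,
    with subfolders keyed by time period (illustrative structure)."""
    folders = defaultdict(lambda: defaultdict(list))
    for c in collections:
        folders[c["scene"]][c["period"]].append(c["title"])
    return {scene: dict(sub) for scene, sub in folders.items()}

collections = [
    {"scene": "trip_to_Sanya", "period": "2021-05", "title": "Beach day"},
    {"scene": "trip_to_Sanya", "period": "2021-05", "title": "Sunset walk"},
    {"scene": "weekend_hiking", "period": "2021-06", "title": "Mountain trail"},
]

for parent, subfolders in organize_collections(collections).items():
    print(parent)                      # parent folder: one target scene
    for sub, titles in subfolders.items():
        print(" ", sub, titles)        # subfolder: collections for one period
```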
  • the target information that can meet the required parameters can be changeable configuration information.
  • when a change to the target information is detected, the displayed target resource is changed according to the changed target information; the application information in the mobile terminal will also change as the user uses the mobile terminal.
  • when a change to the application information is detected, the target resource that has already been displayed may be changed or left unchanged.
  • when a change to the target resource is detected, the newly generated target resource appears in a subfolder or a parallel folder of the original target resource.
  • This application obtains at least one application program information of the mobile terminal, and determines the target scene where at least one user is in according to the application program information; determines the target information according to the target scene, and generates or determines the display target resource according to the target information .
  • generating or determining the target resources to be displayed combined with the user's scene to generate or display resources that can meet the required parameters, so that the displayed resources have a story, which can increase the emotional interaction with the user, thereby improving the user experience.
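  • The following is a minimal sketch, in Python with invented data structures and rules, of steps S11-S12 end to end: application information is collected, a target scene is inferred from it, target information is derived for that scene, and a target resource (here a small photo collection) is assembled. Every class, function and field name is an assumption used only to make the flow concrete:

```python
from dataclasses import dataclass, field

@dataclass
class AppInfo:                      # S11 input: one piece of application information
    source: str                     # e.g. "shopping_app", "health_app"
    payload: dict = field(default_factory=dict)

def determine_target_scenes(app_infos):
    """S11: infer target scenes from application information (toy rules)."""
    scenes = set()
    for info in app_infos:
        if info.source == "shopping_app" and "flight_date" in info.payload:
            scenes.add("upcoming_trip")
        if info.source == "health_app" and info.payload.get("sleep_hours", 8) < 6:
            scenes.add("poor_sleep")
    return scenes or {"daily_life"}

def determine_target_info(scene, app_infos):
    """S12a: pick the target information relevant to one scene."""
    if scene == "upcoming_trip":
        trip = next(i.payload for i in app_infos if "flight_date" in i.payload)
        return {"type": "travel", "destination": trip["destination"],
                "date": trip["flight_date"]}
    return {"type": "generic"}

def generate_target_resource(target_info, gallery):
    """S12b: assemble a photo collection (here: matching picture records)."""
    key = target_info.get("destination", "")
    return [p for p in gallery if key and key in p["tags"]] or gallery[:3]

app_infos = [AppInfo("shopping_app", {"destination": "Sanya", "flight_date": "2021-10-01"})]
gallery = [{"file": "img1.jpg", "tags": ["Sanya", "beach"]},
           {"file": "img2.jpg", "tags": ["office"]}]

for scene in determine_target_scenes(app_infos):
    info = determine_target_info(scene, app_infos)
    print(scene, "->", generate_target_resource(info, gallery))
```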
  • a second embodiment of the data processing method of the present application is proposed.
  • the photo collection is used as an example for illustration.
  • in step S11 of the above-mentioned embodiment, the refinement of determining, according to the target scene the user is in, target information that can meet the demand parameters includes:
  • Step a1 Estimate demand parameters according to the target scene, and determine target information that meets the demand parameters.
  • the target information includes at least one of travel information, exercise health information, social information, and weather information.
  • the user's current emotion and/or possible future emotion are estimated according to the target scene where the user is located, and then the target information that can meet the demand parameters is determined.
  • the acquired application information is firstly integrated and correlated to identify the current and/or future scenes of the user, so as to determine the target scene of the user.
  • the target scene includes At least one of travel dynamics, sports health dynamics, social dynamics and weather conditions. Estimate demand parameters according to the target scene to determine target information that can meet the demand parameters.
  • the target information includes at least one of travel information, exercise health information, social information and weather information.
  • the scenarios where the user may be located include at least one of the user's travel dynamics, sports and health dynamics, social dynamics and weather conditions.
  • the user's own scene is determined by the user's own internal factors, such as the user's travel dynamics, sports and health dynamics, and social dynamics, etc., as well as external factors that determine the user's possible scenario, such as weather conditions, etc. .
  • when determining the factors that meet the demand parameters, it is first necessary to estimate the emotions the user may have according to the recognized scene the user may be in, and then determine the factors that match the demand parameters accordingly.
  • the factors that can meet the demand parameters include at least one of the user's travel information, exercise health information, social information and weather information.
  • the user's travel information can be used as the target information that empathizes with the user, and the information related to the user's travel can be extracted from the application information; the user's travel mode, travel destination, travel date, etc. are determined according to the user's booking information, and weather conditions can further be combined to determine whether the weather at the user's departure point and destination on the day of travel is good or bad and whether it will affect the trip.
  • different empathy models need to be used to generate photo sets containing different emotional colors, so as to Generate more emotional interaction with users.
  • the generated photo collection is displayed to the user, and the ways of displaying it include displaying it in the photo album of the user's mobile phone; when the user triggers a viewing instruction there, the photo collection can be played in the form of a video or slideshow, and when the user's phone is locked and the screen is detected to light up, the pictures in the collection can be scrolled on the phone screen in the form of a slideshow, which is not specifically limited here.
  • steps b1-b2 are the refinement steps of using the empathy model to generate target resources, that is, photo sets:
  • Step b1 converting the target information into resource query conditions
  • Step b2 retrieving resource information in the mobile terminal according to the resource query condition, and using the retrieved resource information to generate a target resource.
  • the preset empathy model is used to convert the target information that can empathize with the user into resource query conditions, and retrieve qualified resource information from the mobile terminal according to the resource query conditions , and generate the target resource based on the retrieved resource information.
  • when using the empathy model to generate a photo collection, the extracted target information must first be converted into resource query conditions, and then, according to the resource query conditions, pictures that meet the conditions are retrieved in the user's mobile terminal (that is, the mobile phone), and the retrieved pictures are then used to generate the corresponding photo collection.
  • different target information corresponds to different resource query conditions.
  • the ways of processing the pictures may also differ; therefore, different empathy models need to be determined according to different target information, so as to generate photo collections corresponding to different target information and create more emotional interaction with the user. It should be noted that, when retrieving pictures in the user's mobile phone, the retrieved pictures are not limited to pictures stored locally in the photo album of the user's phone, and may also be online pictures in applications in the user's phone.
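  • A hedged sketch of steps b1-b2: the target information is converted into a simple query condition, and picture metadata in the terminal (or of indexed online pictures) is filtered by that condition to form the collection. The query representation and the metadata fields (location, date, saturation) are assumptions, not the patent's data model:

```python
def to_query_condition(target_info):
    """Step b1: map target information to a simple query condition (assumed schema)."""
    cond = {}
    if target_info["type"] == "travel":
        cond["location"] = target_info["destination"]
        cond["month_day"] = target_info["date"][5:]      # same period in previous years
    elif target_info["type"] == "weather":
        cond["min_saturation"] = 0.7 if target_info["weather"] in ("rainy", "overcast") else 0.0
    return cond

def retrieve(pictures, cond):
    """Step b2: filter local or online picture metadata by the query condition."""
    hits = []
    for p in pictures:
        if "location" in cond and cond["location"] not in p.get("location", ""):
            continue
        if "month_day" in cond and not p["date"].endswith(cond["month_day"]):
            continue
        if p.get("saturation", 1.0) < cond.get("min_saturation", 0.0):
            continue
        hits.append(p)
    return hits

pictures = [
    {"file": "a.jpg", "location": "Sanya", "date": "2019-10-01", "saturation": 0.8},
    {"file": "b.jpg", "location": "Shanghai", "date": "2020-10-01", "saturation": 0.4},
]
cond = to_query_condition({"type": "travel", "destination": "Sanya", "date": "2021-10-01"})
print(retrieve(pictures, cond))      # pictures taken in Sanya on the same month-day
```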
  • steps S13-S14 are also included:
  • Step S13 obtaining user feedback information, and determining the user's preset level for the target resource according to the obtained feedback information
  • Step S14 when it is detected that the user's preset level of the target resource is lower than or equal to a preset threshold, adjust the target information so as to adjust the target resource.
  • the acquired user feedback information includes instructions triggered by the user's operations when watching the display resource, such as clicking on the album to view more, pause or replay, etc., and also includes information such as the user's viewing time.
  • taking the photo collection as an example, after obtaining the user's feedback information, the user's preset level for the generated photo collection is determined according to the feedback information.
  • when it is detected that the user's preset level for the generated photo collection is lower than or equal to the preset threshold (for example, the user's viewing time for the generated photo collection is relatively short, or the user clicks "not interested", triggering a blocking instruction for the photo collection), the scene the user is in is re-identified, and the target information and the empathy model used to generate the photo collection are adjusted, so as to adjust the generated photo collection until the user's preset level for it meets expectations.
  • if it is detected that, while viewing the photo collection, the user triggers a photo collection sharing instruction and forwards the generated photo collection to social software for sharing, or the user watches for a long time, views the complete photo collection, or even replays it, it can be concluded that the generated photo collection has empathized with the user.
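  • The feedback loop of steps S13-S14 can be sketched as follows: viewing duration, replays, shares and "not interested" clicks are turned into a score standing in for the preset level, and when that score falls at or below a threshold the target information is marked for adjustment and the collection is regenerated. The scoring weights and the threshold are illustrative assumptions, not values taken from the patent:

```python
def preset_level(feedback):
    """S13: derive a preset level from user feedback (toy scoring)."""
    level = 0
    level += min(feedback.get("watch_seconds", 0) // 10, 5)   # longer viewing -> higher level
    level += 3 if feedback.get("shared") else 0
    level += 2 if feedback.get("replayed") else 0
    level -= 5 if feedback.get("not_interested") else 0
    return level

def adjust_if_needed(feedback, target_info, threshold=3):
    """S14: when the level is at or below the threshold, adjust the target information."""
    if preset_level(feedback) <= threshold:
        adjusted = dict(target_info, reidentify_scene=True)    # trigger re-identification
        return adjusted, True
    return target_info, False

info = {"type": "travel", "destination": "Sanya"}
print(adjust_if_needed({"watch_seconds": 4, "not_interested": True}, info))
print(adjust_if_needed({"watch_seconds": 90, "shared": True}, info))
```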
  • the target information that can meet the demand parameters is used as the resource query condition, and the resource information that can meet the demand parameters is retrieved, and the target resources are generated based on the retrieved resource information and displayed to the user.
  • after the target resource is displayed to the user, the user's feedback information is obtained, and the user's preset level for the target resource is determined according to the feedback information; when the user's preset level for the target resource is low, the target information and the target resource are adjusted to further increase the emotional interaction between the target resource and the user.
  • a third embodiment of the data processing method of the present application is proposed.
  • This embodiment is a refinement of step b1 in the above-mentioned embodiments.
  • the generation or determination of the displayed target resources will be described in detail by taking the photo collection as an example.
  • the refinement of converting target information into resource query conditions includes steps c1-c4:
  • Step c1 when the target information includes travel information, use the empathy model to convert one or more of the travel destination, travel date and travel mode in the travel information into resource query conditions.
  • the travel information at least includes travel destination, travel date and travel mode, and at least one of the user's travel information is converted into a resource query condition.
  • a corresponding photo collection can be generated to serve as the user's travel guide, recommending and guiding the user's trip; using the travel date as the resource query condition, pictures from the same period in previous years are obtained from the user's phone album or from the albums of social software, and the date is used as a story line to connect the people and/or events that accompanied the user.
  • the date is used as the story line to connect the pictures corresponding to the places the user was in on the same date each year, recording the user's travel trajectory on that date year after year.
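  • A small sketch of the same-date-across-years story line just described for step c1: pictures whose month and day match the travel date are collected and grouped by year. The metadata fields are assumptions, and real pictures would carry EXIF dates rather than these toy records:

```python
from itertools import groupby

def date_story_line(pictures, travel_date):
    """Step c1 sketch: pictures from the same month-day in previous years,
    grouped by year to form a date-based story line (assumed metadata)."""
    month_day = travel_date[5:]
    same_period = sorted(
        (p for p in pictures if p["date"].endswith(month_day) and p["date"] < travel_date),
        key=lambda p: p["date"],
    )
    return {year: [p["file"] for p in group]
            for year, group in groupby(same_period, key=lambda p: p["date"][:4])}

pictures = [
    {"file": "2019_beach.jpg", "date": "2019-10-01"},
    {"file": "2020_city.jpg",  "date": "2020-10-01"},
    {"file": "2020_other.jpg", "date": "2020-07-15"},
]
print(date_story_line(pictures, "2021-10-01"))
# {'2019': ['2019_beach.jpg'], '2020': ['2020_city.jpg']}
```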
  • Step c2, when the target information includes at least one of exercise and health information and weather information, use the empathy model to establish an association model between resource information and climate factors and/or user emotions, and convert the established association model into resource query conditions.
  • optionally, the resource information includes picture information, the picture information includes the picture tone and the emotional color of the picture, the climate factors are determined from the weather information and include weather type, light intensity, humidity, temperature, and visibility, and the user's emotions are determined from the exercise and health information.
  • the empathy model must first be used to establish an association model between resource information that meets the demand parameter conditions and climate factors and/or user emotions, and then it can be based on the user's Exercise health conditions and/or climate factors, accurately estimate demand parameters, and convert the established association model into resource query conditions.
  • the resource information mainly includes the color tone and emotional color of the picture.
  • the user's exercise and health information and/or weather conditions will affect the demand parameters, and pictures with saturated colors and emotional colors can adjust the user's emotions to a certain extent; therefore, the demand parameters are estimated according to the user's exercise and health information and/or weather information, the corresponding association model is established, and, according to the estimate of the user's emotions, the established association model is used as the resource query condition to retrieve the corresponding pictures and generate a photo collection that adjusts the demand parameters.
  • climate factors can be determined through weather information, mainly including weather type, light intensity, humidity, temperature, visibility, etc., and also include smog or air quality, etc.
  • weather types include sunny, cloudy, rainy and snowy etc.
  • the pictures in the generated photo collection are mainly pictures with the background of ice and snow or blue sky and white clouds;
  • the collection is based mainly on sunny outdoor pictures and optionally high-saturation pictures; when the visibility is low, the pictures used to generate the photo collection are mainly landscape pictures; in other cases, the collection is based mainly on emotionally expressive pictures, and so on.
  • the user's mood can also be determined through the user's sports health information.
  • when judging the user's mood from the exercise and health information in order to establish an association model between picture information and the user's mood, the user's exercise and health information is comprehensively analyzed based on information such as the user's working hours, exercise conditions, sleep duration, and heart rate, so as to judge the user's fatigue level or mood; when it is detected that the user is very tired after working for a long time, the pictures used to generate the photo collection are mainly pictures of smiling family members, relatives and friends and/or funny pictures of pets; when it is detected that the user is not sleeping well, for example getting up very early or sleeping for only a short time, the pictures in the generated collection are mainly high-saturation, brightly sunny pictures, while around the user's bedtime at night the collection is dominated by low-saturation, soothing pictures.
  • the extracted target information usually includes both, but when the user's scene is different, the importance of the two may vary.
  • the demand parameter is mainly related to the fatigue degree caused by the working hours, and when the user is about to travel and play, the weather information can be a factor of the user's emotion. Therefore, it is necessary to determine the difference between the two according to the user's scene. and then generate different resource query conditions to retrieve qualified images.
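  • One way to read step c2 is as a lookup from an estimated user state (derived from weather and exercise/health data) to preferred picture attributes, which then act as the query condition. The sketch below uses hand-written rules; the thresholds, state names and attribute names are assumptions, not the patent's association model:

```python
def estimate_user_state(weather, health):
    """Estimate a coarse user state from climate factors and exercise/health data."""
    if health.get("work_hours", 0) >= 10:
        return "fatigued"
    if health.get("sleep_hours", 8) < 6:
        return "sleep_deprived"
    if weather.get("type") in ("rainy", "overcast") or weather.get("visibility_km", 10) < 2:
        return "gloomy_weather"
    return "neutral"

# Association model: user state -> preferred picture attributes (illustrative only).
ASSOCIATION = {
    "fatigued":       {"subjects": {"family", "friends", "pets"}, "min_saturation": 0.0},
    "sleep_deprived": {"subjects": set(),                         "min_saturation": 0.7},
    "gloomy_weather": {"subjects": {"sunny", "outdoor"},          "min_saturation": 0.6},
    "neutral":        {"subjects": set(),                         "min_saturation": 0.0},
}

def query_condition(weather, health):
    return ASSOCIATION[estimate_user_state(weather, health)]

print(query_condition({"type": "rainy", "visibility_km": 8},
                      {"work_hours": 8, "sleep_hours": 7}))
# -> prefer sunny/outdoor, fairly saturated pictures
```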
  • Step c3 when the target information includes social information, determine the user's social objects according to the social information;
  • Step c4 use the empathy model to make a portrait of the social object, so as to extract the characteristic information of the social object from the social information, and convert the extracted characteristic information into a resource query condition.
  • the feature information includes at least one of birthday information, avatar information, interaction frequency, and intimacy.
  • when the target information includes social information, the user's social information at least includes information in social applications, contact and call information, SMS information, and the like.
  • the characteristic information of social objects includes at least the birthday information of social objects, avatar information, interaction frequency with users, and intimacy etc.
  • their social objects can be divided into frequent contacts and infrequent contacts. For infrequent contacts, when it is detected that infrequent contacts have interacted with users, it is generally more important.
  • the intimacy can be determined by extracting chat content or SMS content; if two people often share daily life, life pictures, or video links, it can be determined that they are relatives or friends, and if they often share files and words such as "meeting" and "report" frequently appear in their social information, it can be determined that they are colleagues.
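  • Steps c3-c4 describe building a lightweight portrait of a social object from interaction data. The sketch below scores intimacy from interaction frequency and keyword cues in shared messages and classifies the contact as a colleague or a relative/friend; the keywords, weights and thresholds are invented for illustration and are not the patent's portrait model:

```python
WORK_WORDS = {"meeting", "report", "deadline"}
LIFE_WORDS = {"dinner", "photo", "birthday", "video"}

def portrait(contact):
    """Steps c3-c4 sketch: extract feature information for one social object."""
    words = set(w.lower() for msg in contact["messages"] for w in msg.split())
    work_hits, life_hits = len(words & WORK_WORDS), len(words & LIFE_WORDS)
    intimacy = contact["interactions_per_week"] + 2 * life_hits
    return {
        "name": contact["name"],
        "birthday": contact.get("birthday"),
        "interaction_frequency": contact["interactions_per_week"],
        "intimacy": intimacy,
        "relation": "colleague" if work_hits > life_hits else "relative_or_friend",
    }

contact = {
    "name": "Alice",
    "birthday": "05-20",
    "interactions_per_week": 6,
    "messages": ["Here is the meeting report", "dinner photo from last night"],
}
print(portrait(contact))
```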
  • the target information when the target information includes multiple factors, it is necessary to conduct a comprehensive analysis of each factor to determine the resource query conditions.
  • the target information can be sorted by importance, the target information with the highest importance is used as the retrieval condition, and the remaining target information is used as filter conditions for secondary screening of the pictures.
  • the target information can be combined and/or adjusted differently according to the actual scene of the user to obtain different resource query conditions, and then obtain different pictures and generate corresponding photo sets.
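  • When the target information contains several factors, the text above suggests ranking them by importance, retrieving with the most important factor, and using the rest as secondary filters. Below is a minimal sketch of that two-stage screening; the importance weights and tag-based matching are assumptions chosen only to keep the example runnable:

```python
def rank_factors(factors):
    """Sort target-information factors by an assumed importance weight."""
    weights = {"travel": 3, "social": 2, "weather": 1}
    return sorted(factors, key=lambda f: weights.get(f["kind"], 0), reverse=True)

def matches(picture, factor):
    return factor["value"] in picture["tags"]

def two_stage_retrieve(pictures, factors):
    """Primary retrieval with the top factor, then secondary screening with the rest."""
    ranked = rank_factors(factors)
    primary = [p for p in pictures if matches(p, ranked[0])]
    for factor in ranked[1:]:
        narrowed = [p for p in primary if matches(p, factor)]
        primary = narrowed or primary          # keep results if a filter empties the set
    return primary

pictures = [
    {"file": "a.jpg", "tags": ["Sanya", "sunny", "friends"]},
    {"file": "b.jpg", "tags": ["Sanya", "rainy"]},
]
factors = [{"kind": "weather", "value": "sunny"}, {"kind": "travel", "value": "Sanya"}]
print(two_stage_retrieve(pictures, factors))   # primary: Sanya, secondary: sunny -> a.jpg
```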
  • the ways of generating resource query conditions in the above-mentioned embodiments are only used to illustrate the embodiments of the present application and are not used to limit the present application.
  • This implementation determines the corresponding empathy model according to different target information, and uses the empathy model to convert the target information into different resource query conditions, so that different resources can be obtained according to different emotions of the user, so that the generated display resources have Storytelling, while increasing the emotional interaction with users, it also improves the flexibility of display resource generation.
  • the present application also provides a mobile terminal.
  • the mobile terminal includes a memory and a processor, and a data processing program is stored in the memory.
  • the data processing program is executed by the processor, the steps of the data processing method in any of the foregoing embodiments are implemented.
  • the present application also provides a computer-readable storage medium, on which a data processing program is stored, and when the data processing program is executed by a processor, the steps of the data processing method in any of the foregoing embodiments are implemented.
  • the embodiments of the mobile terminal and the computer-readable storage medium provided in this application may include all the technical features of any of the above-mentioned data processing method embodiments, which will not be repeated here.
  • An embodiment of the present application further provides a computer program product, the computer program product includes computer program code, and when the computer program code is run on the computer, the computer is made to execute the methods in the above various possible implementation manners.
  • the embodiment of the present application also provides a chip, including a memory and a processor.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the device equipped with the chip executes the methods in the above various possible implementations.
  • Units in the device in the embodiment of the present application may be combined, divided and deleted according to actual needs.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product, and the computer software product is stored in one of the above storage media (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) to execute the method of each embodiment of the present application.
  • a computer program product includes one or more computer instructions.
  • a computer can be a general purpose computer, special purpose computer, a computer network, or other programmable apparatus.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
  • Usable media can be magnetic media (for example, floppy disks, memory disks, magnetic tape), optical media (for example, DVD), or semiconductor media (for example, solid state disks (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Telephone Function (AREA)

Abstract

A data processing method applied to a mobile terminal, a mobile terminal and a storage medium. The method includes: acquiring at least one piece of application information of the mobile terminal, and determining at least one target scene according to the application information (S11); and determining target information according to the target scene, and generating or determining a target resource to display according to the target information (S12). By generating or determining the target resource to be displayed in combination with the target scene the user is in, the method increases the emotional interaction between the displayed target resource and the user, thereby improving the user experience.

Description

Data processing method, mobile terminal and storage medium
This application claims priority to Chinese patent application No. 202110887003.1 filed on August 3, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of data processing, and in particular to a data processing method, a mobile terminal and a storage medium.
Background
In some technical implementations, applications in a mobile terminal can generate photo collections based on information such as time, location or people. In the process of conceiving and implementing the present application, the inventors found at least the following problems: the photo collections generated by the above technical solutions are relatively fixed, have no emotional color or story line, and/or cannot be combined with the scene the user is currently in, and lack emotional interaction with the user, which in turn affects the user experience.
The foregoing description is intended to provide general background information and does not necessarily constitute prior art.
Technical Problem
In view of the above technical problems, the present application provides a data processing method, a mobile terminal and a readable storage medium, which determine information that can meet demand parameters in combination with the scene the user is in, and then generate or determine the target resource to be displayed to the user, so that the displayed content has a story line and can be combined with the user's scene and demand parameters.
Technical Solution
To solve the above technical problems, the present application provides a data processing method applied to a mobile terminal, including:
acquiring at least one piece of application information of the mobile terminal, and determining at least one target scene according to the application information;
determining target information according to the target scene, and generating or determining a target resource to display according to the target information.
In an embodiment, when one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying the target resource corresponding to each of the target scenes;
or, when at least one piece of application information corresponds to the same target scene, displaying the target resource includes displaying the target resource corresponding to that target scene;
or, when at least one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying at least one target resource.
In an embodiment, the target resource includes a folder, and displaying the target resource includes displaying one folder, and/or displaying at least one parallel folder, and/or displaying a parent folder and subfolders.
In an embodiment, the target information is modifiable information, and when a change to the target information is detected, the displayed target resource is changed according to the changed target information.
In an embodiment, when a change to the application information is detected, the displayed target resource is changed according to the changed application information, or is left unchanged.
In an embodiment, when a change to the displayed target resource is detected, the changed target resource is displayed in a subfolder of the target resource or in a parallel folder.
In an embodiment, the step of determining target information according to the target scene includes:
estimating demand parameters according to the target scene, and determining target information that matches the demand parameters; optionally, the target information includes at least one of travel information, exercise and health information, social information, and weather information.
In an embodiment, the step of generating or determining a target resource to display according to the target information includes:
converting the target information into a resource query condition;
retrieving resource information in the mobile terminal according to the resource query condition, and generating or determining the target resource to display from the retrieved resource information.
In an embodiment, after the step of generating or determining a target resource to display according to the target information, the method includes:
acquiring feedback information, and determining a preset level of the target resource according to the acquired feedback information;
when it is detected that the preset level of the target resource is lower than or equal to a preset threshold, adjusting the target information so as to adjust the target resource.
The present application further provides a mobile terminal, including a memory and a processor, wherein a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the above data processing method are implemented.
The present application further provides a computer storage medium storing a computer program, and when the computer program is executed by a processor, the steps of the above data processing method are implemented.
Beneficial Effects
As described above, the present application discloses a data processing method, a mobile terminal and a readable storage medium. The data processing method of the present application is applied to a mobile terminal: at least one piece of application information of the mobile terminal is acquired, and at least one target scene is determined according to the application information; target information is determined according to the target scene, and a target resource to display is generated or determined according to the target information. In this way, the target resource is generated and displayed in combination with the target scene the user is in, increasing the emotional interaction between the displayed content and the user and thereby improving the user experience.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the specification, serve to explain the principles of the present application. In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below; obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present application;
FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of the data processing method according to the first embodiment.
The realization of the objectives, functional features and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings. Specific embodiments of the present application have been shown by the above drawings and will be described in more detail below. These drawings and written descriptions are not intended to limit the scope of the concepts of the present application in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Embodiments of the Invention
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
It should be noted that, herein, the terms "include" and "comprise" and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that includes the element. In addition, components, features and elements with the same name in different embodiments of the present application may have the same meaning or different meanings; their specific meanings should be determined based on their explanation in the specific embodiment or further in combination with the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. Without departing from the scope of this document, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when" or "in response to determining". Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It should be further understood that the terms "comprise" and "include" indicate the presence of the stated features, steps, operations, elements, components, items, categories and/or groups, but do not exclude the presence, occurrence or addition of one or more other features, steps, operations, elements, components, items, categories and/or groups. The terms "or", "and/or", "including at least one of the following", etc. used in the present application may be interpreted as inclusive, or as meaning any one or any combination. For example, "including at least one of the following: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; as another example, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that although the steps in the flowcharts of the embodiments of the present application are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Depending on the context, the words "if" and "in case" as used herein may be interpreted as "upon", "when", "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
It should be noted that step designations such as S11 and S12 are used herein to express the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order; those skilled in the art may, in specific implementations, execute S12 first and then S11, etc., all of which shall fall within the scope of protection of the present application.
It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are only used to facilitate the description of the present application and have no specific meaning in themselves. Therefore, "module", "component" and "unit" may be used interchangeably.
Mobile terminals may be implemented in various forms. The mobile terminals described in the present application may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
In the following description, a mobile terminal will be taken as an example; those skilled in the art will understand that, in addition to elements specifically used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
Referring to FIG. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present application, the mobile terminal 100 may include components such as an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art can understand that the mobile terminal structure shown in FIG. 1 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
The components of the mobile terminal are described in detail below with reference to FIG. 1:
The radio frequency unit 101 can be used for receiving and transmitting signals while sending and receiving information or during a call; specifically, it delivers downlink information received from a base station to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices through wireless communication. The above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and the like.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help users send and receive e-mails, browse web pages and access streaming media, providing users with wireless broadband Internet access. Although FIG. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode or the like. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., call signal reception sound, message reception sound). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode and a voice recognition mode, and can process such sound into audio data. The processed audio (voice) data can, in the case of a phone call mode, be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Optionally, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As a type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally along three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as pedometer and tapping). As for other sensors that can also be configured on the mobile phone, such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer and infrared sensor, they will not be described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile terminal. Optionally, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. Optionally, the touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive commands sent by the processor 110 and execute them. In addition, the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Optionally, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, etc., which are not specifically limited here.
Optionally, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 1 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area. Optionally, the program storage area can store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area can store data created according to the use of the mobile phone (such as audio data and a phone book), and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
The processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect all parts of the entire mobile terminal, and executes various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, so as to monitor the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor. Optionally, the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to the various components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging and power consumption through the power management system.
Although not shown in FIG. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described here.
To facilitate understanding of the embodiments of the present application, the communication network system on which the mobile terminal of the present application is based is described below.
Referring to FIG. 2, FIG. 2 is an architecture diagram of a communication network system provided by an embodiment of the present application. The communication network system is an LTE system of universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator's IP service 204, which are communicatively connected in sequence.
Optionally, the UE 201 may be the above-mentioned terminal 100, which will not be described again here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, etc. Optionally, the eNodeB 2021 can be connected to the other eNodeBs 2022 via a backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide access from the UE 201 to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, etc. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as the home location register (not shown in the figure), and stores user-specific information about service characteristics, data rates, etc. All user data can be sent through the SGW 2034, the PGW 2035 can provide IP address allocation and other functions for the UE 201, and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown in the figure).
The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should know that the present application is not only applicable to the LTE system, but also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
基于上述移动终端硬件结构以及通信网络系统,提出本申请各个实施例。
第一实施例提供了一种数据处理方法,该方法包括以下步骤(S11-S13):
步骤S11,获取所述移动终端的至少一应用程序信息,并根据所述应用程序信息确定至少一目标场景;
本申请所述的数据处理方法应用于移动终端,所述移动终端包括智能手机、智能手表或手环、平板电脑等,以下以应用于智能手机(简称手机)为例进行说明。首先获取用户手机中的至少一个应用程序信息,并根据获取的应用程序信息确定至少一个用户所处的目标场景,可选地,应用程序信息包括用户购物类应用程序中的订单信息,如车票或机票的预定信息、酒店预定信息等,还包括用户当前的位置信息、联系人和通话信息、备忘里的日程和待办信息、各应用程序的使用信息、手机使用时长信息、网络连接情况、IOT数据、与手机通信连接的手环或平板电脑等终端中的数据、音乐和视频等娱乐应用程序的观看历史等。根据获取的应用程序信息,对用户所处的一个或至少一个目标场景进行评估,用户所处的目标场景包括用户当前所处的场景,以及用户未来可能所处的场景。
Step S12: determining target information according to the target scene, and generating or determining a display target resource according to the target information.
Optionally, by evaluating the scene in which the user is located, target information that can meet the demand parameters is determined, and the target resource to be displayed is generated or determined according to the target information. Optionally, the target resource may be a photo collection, or a slideshow or video generated from a photo collection or from pictures in a photo collection.
Optionally, when one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying the target resource corresponding to each of the target scenes;
Optionally, when at least one piece of application information corresponds to a same target scene, displaying the target resource includes displaying at least one target resource corresponding to the target scene;
Optionally, when at least one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying at least one target resource corresponding to each of the target scenes.
Optionally, the target resource may be displayed in the form of a folder. When the displayed target resource is a folder, displaying the target resource includes displaying one folder, optionally displaying at least one parallel folder, and optionally displaying a parent folder and a child folder. Taking a photo collection as an example, if the displayed target resource is at least one photo collection generated according to at least one target scene, the generated photo collections may be displayed in the form of folders: photo collections generated according to the same target scene serve as one folder, the same folder may display at least one photo collection of different time periods or different types generated under the same target scene, and photo collections generated according to different scenes are displayed in different folders. Optionally, the way of classifying the folders of photo collections and the way of dividing parent folders and child folders are not limited thereto and are not specifically limited herein.
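A minimal sketch of one possible folder layout is given below, assuming each generated album is represented as a dict carrying 'scene' and 'name' keys; this grouping scheme is illustrative only and is not the only way to organize folders.

```python
from collections import defaultdict

def group_albums_into_folders(albums):
    """Group generated albums into folders, one folder per target scene."""
    folders = defaultdict(list)
    for album in albums:
        folders[album["scene"]].append(album["name"])  # same scene -> same folder
    return dict(folders)

# Example: two travel albums share one folder, the health album gets its own folder.
albums = [
    {"scene": "travel", "name": "Trip 2020"},
    {"scene": "travel", "name": "Trip 2021"},
    {"scene": "health", "name": "Relaxing moments"},
]
print(group_albums_into_folders(albums))
# {'travel': ['Trip 2020', 'Trip 2021'], 'health': ['Relaxing moments']}
```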
Optionally, the target information that can meet the demand parameters may be changeable configuration information. When a change of the target information is detected, the target resource that has been displayed is changed according to the changed target information. The application information in the mobile terminal also changes as the user uses the mobile terminal; when a change of the application information is detected, the target resource that has been displayed may be changed or kept unchanged. When a change of the target resource is detected, the newly generated target resource after the change may be displayed in a child folder or a parallel folder of the original target resource.
In the present application, at least one piece of application information of the mobile terminal is obtained, and at least one target scene in which the user is located is determined according to the application information; target information is determined according to the target scene, and a display target resource is generated or determined according to the target information. When the target resource to be displayed is generated or determined, a resource that can meet the demand parameters is generated or displayed in combination with the scene in which the user is located, so that the displayed resource is story-like and the emotional interaction with the user can be increased, thereby improving the user experience.
Optionally, based on the above embodiment, a second embodiment of the data processing method of the present application is proposed. In this embodiment, a photo collection is taken as an example of the displayed target resource. In step S12 of the above embodiment, the refinement of determining, according to the target scene in which the user is located, the target information that can meet the demand parameters includes:
Step a1: estimating demand parameters according to the target scene, and determining target information that meets the demand parameters. Optionally, the target information includes at least one of travel information, exercise and health information, social information and weather information.
In this embodiment, the user's current emotion and/or the emotion that may arise in the future is estimated according to the target scene in which the user is located, so as to determine the target information that can meet the demand parameters. Optionally, the obtained application information is first integrated and subjected to association analysis, so as to identify the scene the user is currently in and/or will be in and thereby determine the target scene in which the user is located. Optionally, the target scene includes at least one of travel dynamics, exercise and health dynamics, social dynamics and weather conditions. The demand parameters are estimated according to the target scene to determine the target information that can meet the demand parameters. Optionally, the target information includes at least one of travel information, exercise and health information, social information and weather information.
The scenes the user may be in include at least one of the user's travel dynamics, exercise and health dynamics, social dynamics and weather conditions. Some of the above scenes are determined by the user's own internal factors, such as the user's travel dynamics, exercise and health dynamics and social dynamics, while others are influenced by external factors, such as weather conditions. When the scene in which the user is located is evaluated, the obtained application information is first integrated and subjected to association analysis, the joint effect of internal factors and external factors is comprehensively considered, the scene the user is currently in and/or may be in in the future is identified, and the factors matching the demand parameters are determined according to the identification result.
Optionally, when the factors that meet the demand parameters are determined, the emotion the user may have is first estimated according to the identified scene the user may be in, and the factors matching the demand parameters are then determined according to the demand parameters. The factors that can meet the demand parameters include at least one of the user's travel information, exercise and health information, social information and weather information.
When it is found, through integration and association analysis of the obtained application information, that the user has booked a flight ticket and the travel date falls within a holiday period, it can be determined that the user's trip is not a business trip and may be a trip for sightseeing or visiting relatives and friends, so the demand parameters should be positive. Therefore, the user's travel information can be used as the target information that resonates emotionally with the user, and information related to the user's trip is extracted from the application information: the user's travel mode, travel destination, travel date and the like are determined according to the user's booking information, and the weather conditions may further be combined to determine whether the weather at the user's departure place and destination on the travel day is good or bad and whether it will affect the trip.
Optionally, an empathy model is determined according to the extracted target information, and the empathy model is used to generate a target photo collection that meets the conditions. For different target information, different empathy models need to be used to generate photo collections with different emotional colors, so as to produce more emotional interaction with the user. After the photo collection is generated, it is displayed to the user. The ways of displaying the generated photo collection to the user include displaying it in the photo album of the user's mobile phone: when it is detected that the user taps a viewing instruction for the photo collection in the phone's photo album, the generated photo collection is played as a video or slideshow; alternatively, when the user's phone is in the locked state and it is detected that the screen lights up, the pictures in the photo collection are played as a scrolling slideshow on the phone screen, which is not specifically limited herein.
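As a hedged illustration of the two ideas in the preceding paragraph, the sketch below selects a model label per kind of target information and decides how a generated album might be presented. The model names, the phone-state strings and the dispatch logic are assumptions made for this example only and do not describe an actual implementation.

```python
from typing import Optional

def select_empathy_model(target_info_type: str) -> str:
    """Pick a model (represented here only by a label) for each kind of target information."""
    models = {
        "travel": "memory_and_guide_model",      # recall past visits, recommend destinations
        "health": "mood_regulation_model",       # cheerful or soothing pictures
        "weather": "climate_association_model",  # picture tone matched to climate factors
        "social": "relationship_model",          # birthdays and shared memories
    }
    return models.get(target_info_type, "default_model")

def display_album(album: str, phone_state: str, user_action: Optional[str]) -> str:
    """Decide how to present a generated album, per the two display modes described above."""
    if user_action == "tap_view":
        return f"play {album} as a video/slideshow in the gallery"
    if phone_state == "screen_on_from_lock":
        return f"scroll the pictures of {album} as a slideshow on the lock screen"
    return f"show the {album} entry in the gallery"
```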
Refer to steps b1-b2, which are the refined steps of generating the target resource, namely the photo collection, by using the empathy model:
Step b1: converting the target information into a resource query condition;
Step b2: retrieving resource information in the mobile terminal according to the resource query condition, and generating the target resource by using the retrieved resource information.
When the target resource to be displayed is generated according to the target information, a preset empathy model is used to convert the target information that can resonate emotionally with the user into a resource query condition, resource information that meets the condition is retrieved from the mobile terminal according to the resource query condition, and the target resource is generated based on the retrieved resource information.
A detailed description is given by taking a photo collection as the generated target resource. When the empathy model is used to generate a photo collection, the extracted target information is first converted into a resource query condition, pictures that meet the condition are then retrieved from the user's mobile terminal (that is, the mobile phone) according to the resource query condition, and the corresponding photo collection is generated from the retrieved pictures. Optionally, different target information corresponds to different resource query conditions, and when pictures are retrieved and photo collections are generated according to different resource query conditions, the ways of processing the pictures may also differ. Therefore, different empathy models need to be determined according to different target information, so as to generate photo collections corresponding to the different target information and produce more emotional interaction with the user. It should be noted that when pictures in the user's mobile phone are retrieved, the retrieved pictures are not limited to the pictures stored locally in the photo album of the user's mobile phone, and may also be online pictures in applications on the user's mobile phone.
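The conversion-retrieval-generation pipeline just described might be sketched as follows. The metadata fields assumed on each picture (location, date, tags) and the simple predicate used for matching are illustrative assumptions rather than the actual matching logic of the claimed method.

```python
from typing import Callable, Dict, List

Picture = Dict  # assumed to carry keys such as 'location', 'date' (ISO format) and 'source'

def build_query(target_info: Dict) -> Callable[[Picture], bool]:
    """Convert target information into a resource query condition (a predicate over pictures)."""
    def matches(pic: Picture) -> bool:
        if "destination" in target_info and pic.get("location") != target_info["destination"]:
            return False
        if "date" in target_info and pic.get("date", "")[5:] != target_info["date"][5:]:
            return False  # compare month-day only, e.g. the same date in previous years
        return True
    return matches

def generate_album(pictures: List[Picture], target_info: Dict) -> List[Picture]:
    """Retrieve matching pictures (local or online) and assemble them into an album."""
    query = build_query(target_info)
    return [p for p in pictures if query(p)]
```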
After the target resource that is generated or determined to be displayed is displayed to the user, steps S13-S14 are further included:
Step S13: obtaining feedback information of the user, and determining the user's preset level for the target resource according to the obtained feedback information;
Step S14: when it is detected that the user's preset level for the target resource is lower than or equal to a preset threshold, adjusting the target information so as to adjust the target resource.
Referring to steps S13-S14, after the generated display resource is displayed to the user, the user's feedback information is obtained, and whether the generated display resource resonates emotionally with the user is determined according to the feedback information, so as to determine whether the generated display resource lacks a story, optionally whether the target information does not match the scene in which the user is located, and so on, and finally whether the generated display resource needs to be adjusted can be determined. Optionally, the obtained feedback information of the user includes instructions triggered by the user's operations while viewing the display resource, such as tapping the photo collection to view more, pausing or replaying, and also includes information such as the user's viewing duration.
Taking a photo collection as an example, after the user's feedback information is obtained, the user's preset level for the generated photo collection is determined according to the feedback information. When it is detected that the user's preset level for the generated photo collection is lower than or equal to the preset threshold, for example the user's viewing duration is short, or the user taps "not interested" and triggers a blocking instruction for the photo collection, the scene in which the user is located is re-identified, and the target information and the empathy model used to generate the photo collection are adjusted so as to adjust the generated photo collection, until the user's preset level for the generated photo collection meets expectations. If it is detected that the user triggers a sharing instruction while viewing the photo collection and forwards the generated photo collection to social software for sharing, or optionally the user's viewing duration is long, the user watches the complete photo collection, or even replays it, it can be concluded that the generated photo collection resonates emotionally with the user.
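One possible way to turn the feedback signals mentioned above (viewing duration, "not interested", sharing, replay) into a preset level and an adjustment decision is sketched below; the scoring weights and the threshold value are invented for illustration only.

```python
def preset_level(feedback: dict) -> int:
    """Score the user's feedback on a displayed album; a higher score means stronger resonance."""
    level = 0
    if feedback.get("shared"):
        level += 3                          # forwarded to social software
    if feedback.get("replayed") or feedback.get("watched_full"):
        level += 2
    if feedback.get("view_seconds", 0) > 30:
        level += 1
    if feedback.get("blocked"):             # tapped "not interested"
        level -= 3
    return level

PRESET_THRESHOLD = 0   # assumed threshold

def maybe_adjust(feedback: dict, target_info: dict) -> dict:
    """Flag the target information for re-identification when the preset level is too low."""
    if preset_level(feedback) <= PRESET_THRESHOLD:
        target_info = dict(target_info, needs_rescan=True)  # trigger scene re-identification
    return target_info
```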
In this embodiment, the target information matching the demand parameters is used as the resource query condition, resource information that can meet the demand parameters is retrieved, and the target resource is generated based on the retrieved resource information and displayed to the user, which can increase the emotional interaction between the generated target resource and the user. Optionally, after the target resource is displayed to the user, the user's feedback information is obtained and the user's preset level for the target resource is determined according to the feedback information; when the user's preset level for the target resource is low, the target information and the target resource are adjusted, thereby further increasing the emotional interaction between the target resource and the user.
Optionally, on the basis of the above first and/or second embodiment, a third embodiment of the data processing method of the present application is proposed. This embodiment is a refinement of step b1 in the above embodiment, and a photo collection is taken as an example of the target resource that is generated or determined to be displayed. In step b1, the refinement of converting the target information into a resource query condition includes steps c1-c4:
Step c1: when the target information includes travel information, using the empathy model to convert one or more of the travel destination, travel date and travel transportation mode in the travel information into a resource query condition.
When the extracted target information includes travel information, the travel information includes at least the travel destination, the travel date and the transportation mode, and at least one of them is converted into a resource query condition. With the travel destination as the resource query condition, the photo albums of the user's mobile phone are searched for pictures of that location, so that the generated album evokes the user's memories of the location; optionally, the retrieved pictures are used to generate a corresponding album as the user's travel guide, providing recommendations and guidance for the trip. With the travel date as the resource query condition, pictures from the same period in previous years are obtained from the photo albums of the user's mobile phone or of social software, and the date is used as the story line to link the people who were with the user and/or the events that happened; optionally, with the date as the story line, the pictures corresponding to the places the user was at on the same date each year are linked to record the user's travel trajectory on the same date each year.
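The two uses of travel information described above (destination-based recall and a date-based story line) could be expressed as query conditions roughly as follows; the picture fields and the year-based grouping key are assumptions for illustration.

```python
from collections import defaultdict

def destination_query(pictures, destination):
    """Pictures taken at the travel destination, to evoke memories or serve as a travel guide."""
    return [p for p in pictures if p.get("location") == destination]

def date_storyline(pictures, travel_date):
    """Pictures from the same month-day in previous years, grouped by year as a story line."""
    month_day = travel_date[5:]              # e.g. '10-01' from '2021-10-01'
    by_year = defaultdict(list)
    for p in pictures:
        if p.get("date", "")[5:] == month_day:
            by_year[p["date"][:4]].append(p)  # key by year, e.g. '2019', '2020'
    return dict(sorted(by_year.items()))
```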
Step c2: when the target information includes at least one of exercise and health information and weather information, using the empathy model to establish an association model between resource information and climate factors and/or user emotion, and using the established association model as the resource query condition. Optionally, the resource information includes picture information, the picture information includes picture tone and the emotional color of the picture, the climate factors are determined by the weather information and include weather type, light intensity, humidity, temperature and visibility, and the user emotion is determined by the exercise and health information.
When the extracted target information includes exercise and health information and/or weather information, the empathy model is first used to establish an association model between the resource information that meets the demand-parameter conditions and the climate factors and/or the user emotion, so that the demand parameters can be accurately estimated according to the user's exercise and health status and/or the climate factors, and the established association model is converted into the resource query condition. Optionally, taking pictures as an example, the resource information mainly includes the tone and the emotional color of the pictures. Optionally, the user's exercise and health information and/or the weather conditions affect the demand parameters, and pictures with saturated tones and emotional colors can regulate the user's emotion to a certain extent. Therefore, the demand parameters can be estimated according to the user's exercise and health information and/or the weather information, the corresponding association model is established, and according to the estimation of the user's emotion, the established association model is used as the resource query condition to retrieve the corresponding pictures and generate a photo collection, so as to regulate the demand parameters.
Optionally, the climate factors may be determined from the weather information and mainly include weather type, light intensity, humidity, temperature, visibility and the like, and also include haze or air quality and the like. Optionally, the weather type includes sunny, cloudy, rain and snow, and so on. When the association model between picture information and climate factors is established, the influence of the weather on the user's emotion is analyzed according to the climate factors in the weather information, so as to establish the association model between the resource information and the climate factors. If an impressive special meteorological phenomenon such as a total solar eclipse, a lunar eclipse, meteors, heavy snow, a typhoon or a rainstorm is detected from the weather information, the dates on which the same weather appeared in the past are queried, pictures from those dates are retrieved from the photo albums of the user's mobile phone, and the corresponding photo collection is generated as the target resource to be displayed. If no special meteorological phenomenon is detected, it is determined whether the weather will affect the user's travel or schedule, optionally whether the climate is suitable, and the climate factors are used as the resource query condition to match the user's mood through the picture tone. For example, a scorching sun and high temperature tend to make people irritable, so the photo collection is generated mainly from pictures with backgrounds of snow and ice or blue sky and white clouds; wet and cold weather makes people feel depressed, so the generated photo collection is mainly composed of sunny outdoor pictures or, optionally, highly saturated pictures; when visibility is low, the pictures used to generate the photo collection are mainly landscape pictures; in light rain, the pictures used to generate the photo collection are mainly pictures that can express subtle moods; and so on.
The user emotion may also be determined from the user's exercise and health information. When the user emotion is judged from the exercise and health information so as to establish the association model between picture information and user emotion, the user's exercise and health information is comprehensively analyzed according to the user's working hours, exercise status, sleep duration, heart rate information and the like, so as to judge the user's fatigue level or emotion. When it is detected that the user has been working for a long time and is very tired, the photo collection is generated mainly from pictures containing the smiling faces of family members, relatives and friends and/or funny pictures of pets; when it is detected that the user is not sleeping well, for example waking up early or having a short sleep duration, the photo collection is generated mainly from highly saturated, sunny pictures, and in the case of staying up late, mainly from low-saturation, soothing pictures.
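The association between climate factors or fatigue and the preferred picture style, as described in the two preceding paragraphs, might be captured by a small rule table like the sketch below. The specific rules mirror the examples given above, but the field names and cut-off values are assumptions chosen only for illustration.

```python
def preferred_picture_style(weather: dict, health: dict) -> str:
    """Map climate factors and exercise/health signals to a target picture style."""
    if weather.get("special_event"):                     # eclipse, typhoon, heavy snow, ...
        return "same_weather_in_history"
    if weather.get("temperature_c", 20) > 32 and weather.get("sunny"):
        return "snow_or_blue_sky_background"             # cool down an irritable mood
    if weather.get("humid_cold"):
        return "sunny_high_saturation_outdoor"
    if weather.get("visibility_km", 10) < 2:
        return "landscape"
    if weather.get("light_rain"):
        return "subtle_mood"
    if health.get("work_hours", 0) > 10:
        return "family_friends_smiles_or_funny_pets"
    if health.get("sleep_hours", 8) < 6:
        return "high_saturation_sunny"
    return "default"
```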
Since both the user's exercise and health information and the weather information may affect the demand parameters, in practical applications the extracted target information usually contains both. However, when the user is in different scenes, the importance of the two may differ: when the user is working normally, the demand parameters are mainly related to the fatigue caused by the working hours, whereas when the user is about to go on a trip, the weather information is the dominant factor affecting the user's emotion. Therefore, the relative importance of the two needs to be determined according to the scene in which the user is located, and different resource query conditions are then generated to retrieve pictures that meet the conditions.
Step c3: when the target information includes social information, determining the user's social contacts according to the social information;
Step c4: using the empathy model to profile the social contacts, so as to extract feature information of the social contacts from the social information, and converting the extracted feature information into a resource query condition. Optionally, the feature information includes at least one of birthday information, avatar information, interaction frequency and intimacy.
When the target information includes social information, the user's social contacts need to be determined according to the user's social information, the empathy model is used to profile the user's social contacts, and the feature information of the social contacts is extracted from the social information. Optionally, the user's social information includes at least information in social applications, contact and call information, short message information and the like, and the feature information of a social contact includes at least the contact's birthday information, avatar information, interaction frequency with the user, intimacy and the like. According to the interaction frequency between the user and a social contact, the social contacts can be divided into frequent contacts and infrequent contacts. For an infrequent contact, when it is detected that the infrequent contact has interacted with the user, there is generally something relatively important, so the event of the contact suddenly contacting the user can be extracted by analyzing the call records or short message records between the user and the contact, picture information related to the contact is obtained from the user's social information, and a corresponding photo collection is generated to evoke the user's memories related to the contact. Frequent contacts can be further subdivided according to the interaction frequency or intimacy with the user; the intimacy can be determined by extracting chat content or short message content. If the two often share daily life, life pictures or video links, it can be determined that they are relatives or friends; if the two often share files, or, optionally, words such as "meeting", "report" and "statement" often appear in the social information, it can be determined that they are colleagues. For relatively close relatives, friends or colleagues, the avatar and birthday of the user's social contact can be obtained through social software and contact information. When the current time is close to the birthday of a social contact, face recognition or detection is performed on the social contact by using the extracted avatar information, pictures of the social contact are thereby retrieved from the photo albums of the user's mobile phone, and a photo collection of the social contact is generated, which can be used to remind the user to prepare birthday wishes for the friend/colleague and, at the same time, help the user recall the good times spent with the friend, thereby reminding the user that good memories can be created again on the friend's special day.
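A hedged sketch of the contact-profiling logic described above is given below. The intimacy keywords, the interaction-count cut-off and the seven-day birthday window are placeholders introduced for illustration; any face-matching step is omitted because it would depend on a specific recognition library not described here.

```python
from datetime import date, timedelta

WORK_WORDS = {"meeting", "report", "statement"}   # assumed colleague indicators

def classify_contact(messages, interaction_count):
    """Roughly classify a contact as an infrequent contact, a colleague, or a friend/relative."""
    if interaction_count < 3:
        return "infrequent"
    text = " ".join(messages).lower()
    if any(word in text for word in WORK_WORDS):
        return "colleague"
    return "friend_or_family"

def birthday_album_due(contact_birthday: date, today: date, window_days: int = 7) -> bool:
    """True when the contact's birthday falls within the assumed reminder window."""
    for offset in range(window_days + 1):
        d = today + timedelta(days=offset)
        if (d.month, d.day) == (contact_birthday.month, contact_birthday.day):
            return True
    return False
```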
Optionally, when the target information includes multiple factors, the factors need to be comprehensively analyzed to determine the resource query condition. The target information can be sorted by importance, the target information with the highest importance is used as the retrieval condition, and the other target information is used as filter conditions for secondary screening of the pictures. According to the scene the user is actually in, the target information can be combined and/or adjusted in different ways to obtain different resource query conditions, so that different pictures are obtained and corresponding photo collections are generated. The ways of generating the resource query conditions in the above embodiments are only used to illustrate the embodiments of the present application and are not intended to limit the present application.
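The importance-first retrieval followed by secondary filtering described above might look like the following sketch; the importance ordering is an assumption chosen only to make the example concrete, and the condition keys are expected to come from that ordering.

```python
IMPORTANCE = ["travel", "social", "health", "weather"]   # assumed ordering, most important first

def combined_query(pictures, conditions: dict):
    """Retrieve with the most important condition, then filter the result with the remaining ones."""
    if not conditions:
        return pictures
    ordered = sorted(conditions.items(), key=lambda kv: IMPORTANCE.index(kv[0]))
    _, primary_pred = ordered[0]
    result = [p for p in pictures if primary_pred(p)]     # primary retrieval
    for _, pred in ordered[1:]:                           # secondary screening
        result = [p for p in result if pred(p)]
    return result
```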
In this embodiment, the corresponding empathy model is determined according to different target information, and the empathy model is used to convert the target information into different resource query conditions, so that different resources can be obtained according to the user's different emotions and the generated display resources are story-like, which increases the emotional interaction with the user and at the same time improves the flexibility of generating the display resources.
The present application further provides a mobile terminal. The mobile terminal includes a memory and a processor, a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the data processing method in any of the above embodiments are implemented.
The present application further provides a computer-readable storage medium. A data processing program is stored in the computer-readable storage medium, and when the data processing program is executed by a processor, the steps of the data processing method in any of the above embodiments are implemented.
The embodiments of the mobile terminal and the computer-readable storage medium provided in the present application may include all the technical features of any of the above data processing method embodiments. The expanded and explanatory content of the specification is basically the same as that of the embodiments of the above method and is not described again herein.
The embodiments of the present application further provide a computer program product. The computer program product includes computer program code, and when the computer program code runs on a computer, the computer is caused to execute the methods in the above various possible implementations.
The embodiments of the present application further provide a chip, including a memory and a processor. The memory is configured to store a computer program, and the processor is configured to invoke and run the computer program from the memory, so that a device provided with the chip executes the methods in the above various possible implementations.
It can be understood that the above scenarios are only examples and do not constitute a limitation on the application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, those of ordinary skill in the art know that, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided in the embodiments of the present application are likewise applicable to similar technical problems.
The serial numbers of the above embodiments of the present application are only for description and do not represent the superiority or inferiority of the embodiments.
The steps in the methods of the embodiments of the present application can be reordered, combined and deleted according to actual needs.
The units in the devices of the embodiments of the present application can be combined, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions are generally described in detail only when they first appear; when they appear again later, they are generally not elaborated again for brevity. When the technical solutions and other content of the present application are understood, for the same or similar term concepts, technical solutions and/or application scenario descriptions that are not described in detail later, reference may be made to the previous relevant detailed descriptions.
In the present application, the description of each embodiment has its own emphasis. For parts that are not described in detail or recorded in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
The technical features of the technical solutions of the present application can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combinations of these technical features, they should all be considered to fall within the scope recorded in the present application.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is a better implementation. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a controlled terminal, a network device or the like) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized entirely or partially by software, hardware, firmware or any combination thereof. When software is used, the implementation may be realized entirely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated entirely or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (such as a coaxial cable, an optical fiber or a digital subscriber line) or a wireless manner (such as infrared, radio or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (such as a floppy disk, a storage disk or a magnetic tape), an optical medium (such as a DVD), a semiconductor medium (such as a Solid State Disk (SSD)), or the like.
The above are only preferred embodiments of the present application and do not therefore limit the patent scope of the present application. Any equivalent structure or equivalent process transformation made by using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (10)

  1. A data processing method, applied to a mobile terminal, comprising the following steps:
    obtaining at least one piece of application information of the mobile terminal, and determining at least one target scene according to the application information;
    determining target information according to the target scene, and generating or determining a display target resource according to the target information.
  2. The method according to claim 1, wherein:
    when one piece of application information corresponds to at least one target scene, the displaying the target resource comprises displaying the target resource corresponding to each of the target scenes;
    or, when at least one piece of application information corresponds to a same target scene, the displaying the target resource comprises displaying the target resource corresponding to the target scene;
    or, when at least one piece of application information corresponds to at least one target scene, the displaying the target resource comprises displaying at least one target resource.
  3. The method according to claim 2, wherein the target resource comprises a folder, and the displaying the target resource comprises displaying one folder, and/or displaying at least one parallel folder, and/or displaying a parent folder and a child folder.
  4. The method according to any one of claims 1 to 3, wherein the target information is changeable information, and when a change of the target information is detected, the displayed target resource is changed according to the changed target information.
  5. The method according to any one of claims 1 to 3, wherein when a change of the application information is detected, the displayed target resource is changed according to the changed application information, or the displayed target resource is not changed.
  6. The method according to any one of claims 1 to 3, wherein the step of determining target information according to the target scene comprises:
    estimating demand parameters according to the target scene, and determining target information matching the demand parameters.
  7. The method according to any one of claims 1 to 3, wherein the step of generating or determining a display target resource according to the target information comprises:
    converting the target information into a resource query condition;
    retrieving resource information in the mobile terminal according to the resource query condition, and generating or determining the display target resource by using the retrieved resource information.
  8. The method according to any one of claims 1 to 3, wherein after the step of generating or determining a display target resource according to the target information, the method comprises:
    obtaining feedback information, and determining a preset level of the target resource according to the obtained feedback information;
    when it is detected that the preset level of the target resource is lower than or equal to a preset threshold, adjusting the target information so as to adjust the target resource.
  9. A mobile terminal, wherein the mobile terminal comprises a memory and a processor, wherein a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the data processing method according to any one of claims 1 to 8 are implemented.
  10. A readable storage medium, wherein a computer program is stored in the readable storage medium, and when the computer program is executed by a processor, the steps of the data processing method according to any one of claims 1 to 8 are implemented.
PCT/CN2021/129675 2021-08-03 2021-11-10 Data processing method, mobile terminal and storage medium WO2023010705A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110887003.1A CN113608808A (zh) 2021-08-03 2021-08-03 Data processing method, mobile terminal and storage medium
CN202110887003.1 2021-08-03

Publications (1)

Publication Number Publication Date
WO2023010705A1 true WO2023010705A1 (zh) 2023-02-09

Family

ID=78339319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129675 WO2023010705A1 (zh) 2021-08-03 2021-11-10 Data processing method, mobile terminal and storage medium

Country Status (2)

Country Link
CN (1) CN113608808A (zh)
WO (1) WO2023010705A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113608808A (zh) * 2021-08-03 2021-11-05 上海传英信息技术有限公司 Data processing method, mobile terminal and storage medium
CN114037418A (zh) * 2021-11-08 2022-02-11 深圳传音控股股份有限公司 Clock management method, terminal device and storage medium
CN114691278A (zh) * 2022-06-01 2022-07-01 深圳传音控股股份有限公司 Application processing method, intelligent terminal and storage medium
CN118115629A (zh) * 2024-01-31 2024-05-31 北京百度网讯科技有限公司 Method, apparatus, device and medium for generating pet emoticons and pet models

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911445A (zh) * 2017-11-14 2018-04-13 维沃移动通信有限公司 Message pushing method, mobile terminal and storage medium
US20200272518A1 (en) * 2017-12-06 2020-08-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for Resource Allocation and Related Products
CN109117233A (zh) * 2018-08-22 2019-01-01 百度在线网络技术(北京)有限公司 Method and apparatus for processing information
CN111414900A (zh) * 2020-04-30 2020-07-14 Oppo广东移动通信有限公司 Scene recognition method, scene recognition apparatus, terminal device and readable storage medium
CN113608808A (zh) * 2021-08-03 2021-11-05 上海传英信息技术有限公司 Data processing method, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN113608808A (zh) 2021-11-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21952568

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21952568

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/01/2025)