WO2023010705A1 - Data processing method, mobile terminal and storage medium - Google Patents
- Publication number
- WO2023010705A1 (PCT/CN2021/129675)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- information
- resource
- user
- mobile terminal
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
- G06F16/168—Details of user interfaces specifically adapted to file systems, e.g. browsing and visualisation, 2d or 3d GUIs
Definitions
- the present application relates to the technical field of data processing, and in particular to a data processing method, a mobile terminal and a storage medium.
- applications in mobile terminals can generate photo collections based on information such as time, location, or people.
- the photo collections generated by the above technical solution are relatively fixed: they lack emotional color or a story line, and/or cannot be combined with the user's current scene, and lack emotional interaction with the user, which in turn degrades the user experience.
- the present application provides a data processing method, a mobile terminal and a readable storage medium that determine, in combination with the scene the user is in, information that can meet the demand parameters, and then generate or determine the target resource to be displayed to the user, so that the displayed content is story-like and reflects both the user's scene and the demand parameters.
- the present application provides a data processing method applied to mobile terminals, including:
- acquiring at least one piece of application program information of the mobile terminal, and determining at least one target scene according to the application program information; determining target information according to the target scene, and generating or determining a target resource to be displayed according to the target information.
- when one piece of application information corresponds to at least one target scene, displaying the target resource includes displaying the target resource corresponding to each of the target scenes;
- the displaying the target resource includes displaying the target resource corresponding to the target scene;
- the displaying the target resource includes displaying at least one target resource.
- the target resource includes a folder.
- the displaying the target resource includes displaying a folder, and/or displaying at least one parallel folder, and/or displaying a parent folder and a subfolder.
- the target information is modifiable information, and when it is detected that the target information is changed, the displayed target resource is changed according to the changed target information.
- when the application program information changes, the displayed target resource is changed according to the changed application program information, or it remains unchanged.
- the changed target resource is displayed in a subfolder of the target resource, or in a parallel folder.
- the step of determining target information according to the target scene includes:
- the target information includes at least one of travel information, exercise health information, social information, and weather information.
- the step of generating or determining the target resources to be displayed according to the target information includes:
- the target information is adjusted to adjust the target resource.
- the present application also provides a mobile terminal, including: a memory and a processor, wherein a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the above data processing method are implemented.
- the present application also provides a computer storage medium, where the computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the above-mentioned data processing method are realized.
- this application discloses a data processing method, a mobile terminal, and a readable storage medium.
- the data processing method of this application is applied to a mobile terminal.
- This application acquires at least one piece of application program information of the mobile terminal, and determines at least one target scene according to the application program information; it then determines target information according to the target scene, and generates or determines a target resource to be displayed according to the target information.
- the target resource is generated and displayed in combination with the target scene where the user is located, and the emotional interaction between the displayed content and the user is increased, thereby improving the user experience.
- FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application
- FIG. 2 is a system architecture diagram of a communication network provided by an embodiment of the present application.
- Fig. 3 is a schematic flowchart of a data processing method according to the first embodiment.
- first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. Without departing from the scope of this document, first information may also be called second information, and similarly, second information may also be called first information.
- the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise.
- "A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; similarly, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". Exceptions to this definition arise only when combinations of elements, functions, steps or operations are inherently mutually exclusive in some way.
- the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting".
- the phrases "if determined" or "if detected (the stated condition or event)" may be interpreted as "when determined" or "in response to the determination" or "when detected (the stated condition or event)" or "in response to detection of (the stated condition or event)".
- step codes such as S11 and S12 are used to express the corresponding content more clearly and concisely, and do not constitute a substantive limitation on the order; for example, an implementation may execute S12 first and then S11, and such variations fall within the scope of protection of this application.
- Mobile terminals may be implemented in various forms.
- the mobile terminals described in this application may include mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, pedometers and other mobile terminals, as well as fixed terminals such as digital TVs and desktop computers.
- a mobile terminal will be taken as an example, and those skilled in the art will understand that, in addition to elements specially used for mobile purposes, the configurations according to the embodiments of the present application can also be applied to fixed-type terminals.
- FIG. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present application.
- the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
- the radio frequency unit 101 can be used for sending and receiving information, or for receiving and sending signals during a call; specifically, after receiving downlink information from the base station, it forwards the information to the processor 110 for processing, and it also sends uplink data to the base station.
- the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio frequency unit 101 can also communicate with the network and other devices through wireless communication.
- the above wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication, Global System for Mobile Communications), GPRS (General Packet Radio Service, general packet radio service), CDMA2000 (Code Division Multiple Access 2000, Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access, Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access, Time Division Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing- Long Term Evolution, frequency division duplex long-term evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution, Time Division Duplexing Long Term Evolution) and so on.
- WiFi is a short-distance wireless transmission technology.
- the mobile terminal can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 102, which provides users with wireless broadband Internet access.
- although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and can be omitted as required without changing the essence of the invention.
- the audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
- the audio output unit 103 can also provide audio output (call signal reception sound, message reception sound, etc.) related to a specific function performed by the mobile terminal 100 .
- the audio output unit 103 may include a speaker, a buzzer, and the like.
- the A/V input unit 104 is used to receive audio or video signals.
- the A/V input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode, and the processed image frames may be displayed on the display unit 106.
- the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage media) or sent via the radio frequency unit 101 or the WiFi module 102 .
- the microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, and similar operating modes, and can process such sound into audio data.
- in a phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output.
- the microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
- the mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 moves to the ear.
- as one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection); other sensors that may also be configured on the mobile phone, such as fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, will not be described in detail here.
- the display unit 106 is used to display information input by the user or information provided to the user.
- the display unit 106 may include a display panel 1061 , and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like.
- the user input unit 107 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the mobile terminal.
- the user input unit 107 may include a touch panel 1071 and other input devices 1072 .
- the touch panel 1071, also referred to as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
- the touch panel 1071 may include two parts, a touch detection device and a touch controller.
- the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110.
- the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
- the user input unit 107 may also include other input devices 1072 .
- the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and switch buttons), a trackball, a mouse, a joystick, and the like, which are not specifically limited here.
- the touch panel 1071 may cover the display panel 1061.
- when the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
- although the touch panel 1071 and the display panel 1061 are shown as two independent components to realize the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize these functions.
- the implementation of the input and output functions of the mobile terminal is not specifically limited here.
- the interface unit 108 serves as an interface through which at least one external device can be connected with the mobile terminal 100 .
- an external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
- the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or to transfer data between the mobile terminal and an external device.
- the memory 109 can be used to store software programs as well as various data.
- the memory 109 can mainly include a program storage area and a data storage area.
- the program storage area can store an operating system and at least one application program required by a function (such as a sound playback function or an image playback function); the data storage area can store data created according to the use of the mobile phone (such as audio data and a phonebook).
- the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the processor 110 is the control center of the mobile terminal; it uses various interfaces and lines to connect the various parts of the entire mobile terminal, and executes the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, so as to monitor the mobile terminal as a whole.
- the processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor.
- the application processor mainly processes operating systems, user interfaces, and application programs, etc.
- the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 110.
- the mobile terminal 100 may also include a power source 111 (such as a battery) for supplying power to various components.
- the power source 111 may be logically connected to the processor 110 through a power management system, so as to manage functions such as charging, discharging, and power consumption through the power management system.
- the mobile terminal 100 may also include a Bluetooth module, etc., which will not be repeated here.
- the following describes the communication network system on which the mobile terminal of the present application is based.
- FIG. 2 is a structure diagram of a communication network system provided by an embodiment of the present application.
- the communication network system is an LTE system of universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP service 204.
- the UE 201 may be the above-mentioned terminal 100, which will not be repeated here.
- the E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022, and so on.
- the eNodeB 2021 can be connected to other eNodeB 2022 through a backhaul (for example, X2 interface), the eNodeB 2021 is connected to the EPC 203 , and the eNodeB 2021 can provide access from the UE 201 to the EPC 203 .
- the EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like.
- the MME 2031 is a control node that processes signaling between the UE 201 and the EPC 203, and provides bearer and connection management.
- the HSS 2032 is used to provide registers to manage functions such as those of a home location register (not shown in the figure), and stores user-specific information about service characteristics, data rates, and the like.
- the PCRF 2036 is the policy and charging control policy decision point for service data flows and IP bearer resources; it selects and provides available policy and charging control decisions for a policy and charging enforcement function unit (not shown).
- the IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
- although the LTE system is used as an example above, those skilled in the art should know that this application is applicable not only to the LTE system but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
- the first embodiment provides a data processing method, which includes the following steps (S11-S12):
- Step S11 acquiring at least one application program information of the mobile terminal, and determining at least one target scene according to the application program information;
- the data processing method described in this application is applied to a mobile terminal, and the mobile terminal includes a smart phone, a smart watch or bracelet, a tablet computer, etc., and the application to a smart phone (referred to as a mobile phone) is used as an example for illustration below.
- the application program information includes order information in the user's shopping applications, such as ticket reservation information and hotel reservation information; it also includes the user's current location information, contact and call information, schedule and to-do information in the memo, usage information of various applications, mobile phone usage time information, the network connection status, IoT data, data in terminals such as bracelets or tablets connected to the mobile phone, and the viewing history of entertainment applications such as music and video apps.
- the target scene where the user is located includes a current scene where the user is located and a scene where the user may be located in the future.
- Step S12 determining target information according to the target scene, and generating or determining a display target resource according to the target information.
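As a non-authoritative sketch, the two-step pipeline of S11 and S12 can be outlined in Python. All names here (`AppInfo`, the scene labels, the mapping rules) are illustrative assumptions; the application does not define a concrete API:

```python
from dataclasses import dataclass, field

@dataclass
class AppInfo:
    """One piece of application program information read from the terminal (hypothetical shape)."""
    source: str                              # e.g. "shopping", "health", "location"
    payload: dict = field(default_factory=dict)

def determine_target_scenes(app_infos):
    """Step S11: infer at least one target scene from the application information."""
    scenes = set()
    for info in app_infos:
        if info.source == "shopping" and "ticket" in info.payload:
            scenes.add("travel")             # a ticket order suggests a travel scene
        if info.source == "health":
            scenes.add("exercise")           # health-app data suggests an exercise scene
    return scenes

def determine_target_info(scene):
    """Step S12, first half: map a target scene to target information."""
    return {"travel": "travel_info", "exercise": "exercise_health_info"}.get(scene)

def generate_target_resource(target_info):
    """Step S12, second half: generate the resource to display (e.g. a photo collection)."""
    return {"type": "photo_collection", "based_on": target_info}
```

A real implementation would replace the hard-coded rules with the scene-recognition logic described in the embodiments below.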
- target information that can meet the requirement parameters is determined by evaluating the scene where the user is in, and the target resource to be displayed is generated or determined according to the target information.
- the target resource may be a photo album, or a slideshow or video generated from a photo album or pictures in the photo album.
- displaying the target resource includes separately displaying target resources corresponding to each target scene;
- displaying the target resource includes displaying at least one target resource corresponding to the target scene;
- displaying the target resource includes respectively displaying at least one target resource corresponding to each target scene.
- the target resource can be displayed in the form of a folder.
- displaying the target resource includes displaying a folder; optionally, displaying at least one parallel folder; and optionally, displaying a parent folder and subfolders.
- the generated photo collections can be displayed in the form of folders: the photo collections generated according to the same target scene can be placed in one folder, at least one photo collection of different time periods or different types generated under the same target scene can be displayed in the same folder, and photo collections generated according to different scenes are displayed in different folders.
- the way of classifying the photo-album folders and of dividing parent folders and subfolders is not limited thereto, and no specific limitation is made here.
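The folder layout described above (one parent folder per target scene, with entries grouped per time period or type) might be sketched as follows; the tuple layout and function name are hypothetical:

```python
def organize_into_folders(collections):
    """Group photo collections into a scene -> period -> [collection ids] hierarchy.

    collections: iterable of (scene, period, collection_id) tuples (assumed shape).
    Returns a nested dict mirroring parent folders (scenes) and subfolders (periods).
    """
    folders = {}
    for scene, period, cid in collections:
        folders.setdefault(scene, {}).setdefault(period, []).append(cid)
    return folders
```

Collections for the same scene but different periods thus land in sibling subfolders, while different scenes become parallel parent folders.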
- the target information that can meet the required parameters can be changeable configuration information.
- when it is detected that the target information has changed, the displayed target resource is changed according to the changed target information; the application program information in the mobile terminal will also change as the user uses the mobile terminal.
- the target resource that has already been displayed may change or remain unchanged.
- the newly generated target resource appears in a subfolder or a parallel folder of the original target resource.
- This application acquires at least one piece of application program information of the mobile terminal, and determines at least one target scene the user is in according to the application program information; it then determines target information according to the target scene, and generates or determines the target resource to be displayed according to the target information.
- generating or determining the target resources to be displayed in combination with the user's scene yields resources that can meet the demand parameters, so that the displayed resources have a story line, which can increase the emotional interaction with the user and thereby improve the user experience.
- a second embodiment of the data processing method of the present application is proposed.
- the photo collection is used as an example for illustration.
- a refinement of the step in the above-mentioned embodiment of determining, according to the user's target scene, the target information that can meet the demand parameters includes:
- Step a1: estimating demand parameters according to the target scene, and determining target information that meets the demand parameters.
- the target information includes at least one of travel information, exercise health information, social information, and weather information.
- the user's current emotion and/or possible future emotion are estimated according to the target scene where the user is located, and then the target information that can meet the demand parameters is determined.
- the acquired application information is firstly integrated and correlated to identify the current and/or future scenes of the user, so as to determine the target scene of the user.
- the target scene includes at least one of travel dynamics, sports and health dynamics, social dynamics, and weather conditions; demand parameters are estimated according to the target scene to determine the target information that can meet them.
- the target information includes at least one of travel information, exercise health information, social information and weather information.
- the scenarios where the user may be located include at least one of the user's travel dynamics, sports and health dynamics, social dynamics and weather conditions.
- the user's possible scene is determined by the user's own internal factors, such as the user's travel dynamics, sports and health dynamics, and social dynamics, as well as by external factors, such as weather conditions.
- to determine the factors that meet the demand parameters, it is first necessary to estimate the user's possible emotions according to the recognized scene the user may be in, and then determine the factors that meet the demand parameters accordingly.
- the factors that can meet the demand parameters include at least one of the user's travel information, exercise health information, social information and weather information.
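A minimal sketch of mapping recognized scenes to the target information that can meet the demand parameters. The mapping table mirrors the four scene and information types named above; the function name and the simple union-based selection are assumptions:

```python
# Illustrative scene -> target-information mapping (labels are assumptions).
SCENE_TO_INFO = {
    "travel_dynamics": ["travel_info", "weather_info"],
    "sports_health_dynamics": ["exercise_health_info"],
    "social_dynamics": ["social_info"],
    "weather_conditions": ["weather_info"],
}

def estimate_target_info(scenes):
    """Return the ordered union of target-information types that meet the
    demand parameters estimated from the recognized scenes."""
    info = []
    for scene in scenes:
        for t in SCENE_TO_INFO.get(scene, []):
            if t not in info:        # avoid duplicates across scenes
                info.append(t)
    return info
```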
- the user's travel information can be used as the target information that empathizes with the user, and the information related to the user's travel can be extracted from the application information.
- the reservation information determines the user's travel mode, travel destination, travel date, and so on, and can further be combined with weather conditions to determine whether the weather at the user's departure point and destination on the day of travel is good or bad and whether it will affect the trip.
- different empathy models need to be used to generate photo collections containing different emotional colors, so as to generate more emotional interaction with users.
- the generated photo collection is displayed to the user, and the ways of displaying it include showing it in the user's mobile phone photo album.
- the photo collection can be played in the form of a video or a slideshow; when the user's mobile phone is locked and it is detected that the screen is turned on, the pictures in the collection can be scrolled on the mobile phone screen in the form of a slideshow, which is not specifically limited here.
- steps b1-b2 are the refinement steps of using the empathy model to generate the target resource, that is, the photo collection:
- Step b1: converting the target information into resource query conditions;
- Step b2: retrieving resource information in the mobile terminal according to the resource query conditions, and using the retrieved resource information to generate the target resource.
- the preset empathy model is used to convert the target information that can empathize with the user into resource query conditions, and retrieve qualified resource information from the mobile terminal according to the resource query conditions , and generate the target resource based on the retrieved resource information.
- when using the empathy model to generate a photo collection, the extracted target information must first be converted into resource query conditions; then, according to the resource query conditions, pictures that meet the conditions are retrieved in the user's mobile terminal (i.e., the mobile phone), and the retrieved pictures are used to generate the corresponding photo collection.
- different target information corresponds to different resource query conditions.
- The processing methods for pictures may also differ. Therefore, different empathy models need to be selected according to different target information, so as to generate photo collections corresponding to that information and create richer emotional interaction with users. It should be noted that, when retrieving pictures on the user's phone, the retrieved pictures are not limited to those stored locally in the phone's photo album; they may also be online pictures in applications on the phone.
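Steps b1-b2 above can be sketched as a two-stage pipeline. This is an illustrative sketch only; the function names, the metadata fields, and the exact-match retrieval rule are assumptions, not the patent's implementation.

```python
# Step b1 (assumed shape): map extracted target information onto a
# resource query condition expressed as a metadata dict.
def to_query_condition(target_info):
    condition = {}
    if "travel_destination" in target_info:
        condition["location"] = target_info["travel_destination"]
    if "travel_date" in target_info:
        condition["date"] = target_info["travel_date"]
    return condition

# Step b2 (assumed shape): keep pictures whose metadata satisfies
# every field of the query condition.
def retrieve_resources(pictures, condition):
    return [p for p in pictures
            if all(p.get(k) == v for k, v in condition.items())]

pictures = [
    {"id": 1, "location": "Sanya", "date": "2021-08-03"},
    {"id": 2, "location": "Harbin", "date": "2021-08-03"},
]
condition = to_query_condition({"travel_destination": "Sanya",
                                "travel_date": "2021-08-03"})
album = retrieve_resources(pictures, condition)
```

In a real terminal the retrieval stage would query the local album database and application picture caches rather than an in-memory list.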
- steps S13-S14 are also included:
- Step S13: obtaining user feedback information, and determining the user's preset level for the target resource according to the obtained feedback information;
- Step S14: when it is detected that the user's preset level for the target resource is lower than or equal to a preset threshold, adjusting the target information so as to adjust the target resource.
- The acquired user feedback information includes instructions triggered by the user's operations while watching the displayed resource, such as clicking the album to view more, pausing, or replaying, and also includes information such as the user's viewing duration.
- After the user's feedback information is obtained, the user's preset level for the generated photo collection is determined from that feedback.
- When it is detected that the user's preset level for the generated photo collection is lower than or equal to the preset threshold, for example when the viewing time is relatively short or clicking "not interested" triggers a blocking command for the photo collection, the scene the user is in is re-identified, and the target information and the empathy model that generated the photo collection are adjusted, so as to adjust the generated collection until the user's preset level for it meets expectations.
- When the photo collection sharing command is triggered, the generated photo collection is forwarded to social software for sharing.
- When the user watches for a long time, watches the complete photo collection, or even replays it, it can be concluded that the generated photo collection has resonated emotionally with the user.
- The target information that can meet the demand parameters is used as the resource query condition, resource information meeting those parameters is retrieved, and the target resource is generated from the retrieved resource information and displayed to the user.
- The user's feedback information is obtained, and the user's preset level for the target resource is determined from it. When that level is low, the target information and the target resource are adjusted to further increase the emotional interaction between the target resource and the user.
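The feedback loop of steps S13-S14 can be sketched as follows. The scoring rule and the threshold value are assumptions for illustration; the patent leaves the grading scheme open.

```python
# Assumed threshold; the patent only says "a preset threshold".
PRESET_THRESHOLD = 3

def grade_feedback(feedback):
    """Step S13 (sketch): derive a preset level from viewing behaviour."""
    level = min(feedback["watch_seconds"] // 10, 5)  # longer viewing, higher level
    if feedback.get("replayed"):
        level += 1                                   # replaying signals resonance
    if feedback.get("not_interested"):
        level = 0                                    # explicit blocking command
    return level

def maybe_adjust(target_info, feedback):
    """Step S14 (sketch): flag the scene for re-identification when the
    level falls at or below the threshold."""
    if grade_feedback(feedback) <= PRESET_THRESHOLD:
        target_info = dict(target_info, needs_rescan=True)
    return target_info

info = maybe_adjust({"scene": "travel"},
                    {"watch_seconds": 5, "not_interested": True})
```

A production system would loop: regenerate the collection from the adjusted target information, collect fresh feedback, and repeat until the level exceeds the threshold.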
- a third embodiment of the data processing method of the present application is proposed.
- This embodiment is a refinement of step b1 in the above-mentioned embodiments.
- The generation or determination of the displayed target resource will be described in detail, taking the photo collection as an example.
- the refinement of converting target information into resource query conditions includes steps c1-c4:
- Step c1 when the target information includes travel information, use the empathy model to convert one or more of the travel destination, travel date and travel mode in the travel information into resource query conditions.
- the travel information at least includes travel destination, travel date and travel mode, and at least one of the user's travel information is converted into a resource query condition.
- A corresponding atlas can be generated to serve as the user's travel guide, recommending and guiding the trip. With the travel date as the resource query condition, pictures from the same period in previous years are obtained from the phone's photo album or from social software, and the date serves as a story line.
- The people and/or events that accompanied the user are connected in series.
- With the date as the story line, the pictures corresponding to the places where the user was on the same date in each year are connected in series, recording the user's travel trajectory on that date across the years.
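The date-as-story-line idea in step c1 can be sketched like this, assuming simple "YYYY-MM-DD" date strings on picture metadata; the function name and fields are illustrative.

```python
# Collect pictures taken on the same month/day in previous years and
# order them chronologically, so the date acts as the story line.
def same_day_in_previous_years(pictures, travel_date):
    month_day = travel_date[5:]                 # "MM-DD" part of "YYYY-MM-DD"
    hits = [p for p in pictures
            if p["date"][5:] == month_day and p["date"] < travel_date]
    return sorted(hits, key=lambda p: p["date"])  # chronological order

pictures = [
    {"date": "2019-08-03", "place": "Xiamen"},
    {"date": "2020-08-03", "place": "Chengdu"},
    {"date": "2020-10-01", "place": "Beijing"},
]
story = same_day_in_previous_years(pictures, "2021-08-03")
```

The resulting sequence strings together where the user was on the same date each year, matching the "travel trajectory" described above.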
- Step c2: when the target information includes at least one of sports health information and weather information, use the empathy model to establish an association model between resource information and climate factors and/or user emotions, and convert the established association model into resource query conditions.
- The resource information includes picture information; the picture information includes picture tone and picture emotional color. The climate factors are determined by the weather information and include weather type, light intensity, humidity, temperature and visibility, while the user's emotions are determined by the sports health information.
- The empathy model must first be used to establish an association model between resource information meeting the demand-parameter conditions and the climate factors and/or user emotions; the demand parameters can then be accurately estimated from the user's exercise and health conditions and/or the climate factors, and the established association model converted into resource query conditions.
- the resource information mainly includes the color tone and emotional color of the picture.
- The user's exercise health information and/or weather conditions affect the demand parameters, and pictures with saturated colors and strong emotional color can adjust the user's emotions to a certain extent. Therefore, the demand parameters are estimated from the user's exercise health information and/or weather information and the corresponding association model is established; according to the estimate of the user's emotions, the established association model is used as the resource query condition to retrieve matching pictures and generate a photo album that adjusts the demand parameters.
- climate factors can be determined through weather information, mainly including weather type, light intensity, humidity, temperature, visibility, etc., and also include smog or air quality, etc.
- Weather types include sunny, cloudy, rainy, snowy, etc.
- In snowy weather, the pictures in the generated photo collection are mainly pictures with backgrounds of ice and snow or blue sky and white clouds;
- In sunny weather, the collection is mainly based on sunny outdoor pictures, optionally with high saturation; when visibility is low, the collection consists mainly of landscape pictures; in other conditions, mainly pictures with matching emotional color; and so on.
- the user's mood can also be determined through the user's sports health information.
- When the user's mood is judged through the sports health information in order to establish an association model between picture information and user mood, the user's sports health information is analyzed comprehensively from working hours, exercise conditions, sleep duration and heart rate, so as to judge the user's fatigue level or mood.
- When it is detected that the user is very tired after working for a long time, the pictures in the generated album are mainly smiling pictures of family, relatives and friends and/or funny pictures of pets; when it is detected that the user has slept poorly, such as getting up very early or sleeping only briefly, the pictures are mainly high-saturation, bright and sunny pictures; near sleeping time at night, low-saturation, soothing pictures dominate.
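The association model of step c2 can be sketched as a rule table from climate factors and sports-health signals to a preferred picture style. The rules paraphrase the examples above; the thresholds and style labels are assumptions.

```python
# Map climate factors and health signals to a picture-style label that
# the retrieval stage would then use as a query condition.
def preferred_style(weather_type=None, visibility_km=None,
                    hours_worked=None, sleep_hours=None):
    if hours_worked is not None and hours_worked >= 10:
        return "family_and_pet_smiles"      # very tired after long work
    if sleep_hours is not None and sleep_hours < 6:
        return "bright_high_saturation"     # poor sleep: sunny pictures
    if weather_type == "snowy":
        return "snow_or_blue_sky"
    if visibility_km is not None and visibility_km < 1:
        return "landscape"                  # low visibility: scenery
    return "default"
```

Health signals are checked before weather here on the assumption that fatigue dominates when the user has been working continuously, mirroring the weighting discussion that follows.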
- The extracted target information usually includes both, but their relative importance may vary with the user's scene.
- When the user has been working continuously, the demand parameter is mainly related to the fatigue caused by the working hours; when the user is about to travel, the weather information can be the dominant factor in the user's emotion. Therefore, the weights of the two need to be determined according to the user's scene, and different resource query conditions generated to retrieve qualified pictures.
- Step c3 when the target information includes social information, determine the user's social objects according to the social information;
- Step c4 use the empathy model to make a portrait of the social object, so as to extract the characteristic information of the social object from the social information, and convert the extracted characteristic information into a resource query condition.
- The feature information includes at least one of birthday information, avatar information, interaction frequency, and intimacy.
- When the target information includes social information, the user's social information includes at least information in social applications, contacts and call records, and SMS messages; the characteristic information of a social object includes at least the object's birthday information, avatar information, interaction frequency with the user, and intimacy.
- Social objects can be divided into frequent and infrequent contacts. For infrequent contacts, an interaction with the user, when detected, is generally more important.
- Intimacy can be determined by extracting chat content or SMS content. If two people often share daily life, life pictures or video links, it can be determined that they are relatives or friends; if they often share files, and words such as "meeting" or "report" frequently appear in their social information, it can be determined that they are colleagues.
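The portrait step (c3-c4) can be sketched as a simple classifier over message content. The keyword list mirrors the "meeting"/"report" example above; everything else (field names, the majority rule) is an illustrative assumption.

```python
# Guess the relationship of a social object from shared content: work
# keywords suggest a colleague, shared life pictures or video links
# suggest a relative or friend.
WORK_WORDS = {"meeting", "report"}

def classify_relationship(messages):
    work_hits = sum(any(w in m["text"] for w in WORK_WORDS)
                    for m in messages)
    life_hits = sum(m.get("kind") in {"photo", "video_link"}
                    for m in messages)
    return "colleague" if work_hits > life_hits else "relative_or_friend"

msgs = [{"text": "meeting at 10", "kind": "text"},
        {"text": "quarterly report attached", "kind": "file"}]
label = classify_relationship(msgs)
```

The resulting label would feed into the feature information (intimacy) that step c4 converts into a resource query condition.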
- When the target information includes multiple factors, each factor must be analyzed comprehensively to determine the resource query conditions.
- The pieces of target information can be sorted by importance; the most important one is used as the retrieval condition, and the remaining information is used as filter conditions for a secondary screening of the pictures.
- the target information can be combined and/or adjusted differently according to the actual scene of the user to obtain different resource query conditions, and then obtain different pictures and generate corresponding photo sets.
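The primary-retrieval-plus-secondary-filter scheme can be sketched as follows; the tuple layout and the numeric importance scores are assumptions for illustration.

```python
# Rank target-information factors by importance: the top factor drives
# retrieval, the rest become secondary filter conditions.
def build_query(factors):
    """factors: list of (name, value, importance) tuples."""
    ranked = sorted(factors, key=lambda f: f[2], reverse=True)
    primary, rest = ranked[0], ranked[1:]
    return {"retrieve_by": primary[:2],
            "filter_by": [f[:2] for f in rest]}

query = build_query([("weather", "sunny", 0.3),
                     ("travel_date", "2021-08-03", 0.9)])
```

Different scenes would assign different importance scores to the same factors, yielding different query conditions and hence different photo collections, as the text describes.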
- The way the resource query conditions are generated in the above embodiments is only used to illustrate the embodiments of the present application, and is not used to limit the present application.
- This embodiment determines the corresponding empathy model according to different target information and uses it to convert the target information into different resource query conditions, so that different resources can be obtained according to the user's different emotions. The generated display resources thus have a story line; while the emotional interaction with the user increases, the flexibility of display-resource generation also improves.
- the present application also provides a mobile terminal.
- the mobile terminal includes a memory and a processor, and a data processing program is stored in the memory.
- the data processing program is executed by the processor, the steps of the data processing method in any of the foregoing embodiments are implemented.
- the present application also provides a computer-readable storage medium, on which a data processing program is stored, and when the data processing program is executed by a processor, the steps of the data processing method in any of the foregoing embodiments are implemented.
- The embodiments of the mobile terminal and the computer-readable storage medium provided in this application may contain all the technical features of any of the above data processing method embodiments, which are not repeated here.
- An embodiment of the present application further provides a computer program product, the computer program product includes computer program code, and when the computer program code is run on the computer, the computer is made to execute the methods in the above various possible implementation manners.
- the embodiment of the present application also provides a chip, including a memory and a processor.
- The memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that the device in which the chip is installed executes the methods in the above various possible implementations.
- Units in the device in the embodiment of the present application may be combined, divided and deleted according to actual needs.
- The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
- The technical solution of the present application, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in one of the above storage media (such as ROM/RAM, magnetic disk, or optical disc) and includes several instructions to make a terminal device (which may be a mobile phone, computer, server, controlled terminal, or network device, etc.) execute the method of each embodiment of the present application.
- a computer program product includes one or more computer instructions.
- a computer can be a general purpose computer, special purpose computer, a computer network, or other programmable apparatus.
- Computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, a data center, etc. integrated with one or more available media.
- Usable media can be magnetic media (for example, floppy disks, hard disks, magnetic tape), optical media (for example, DVD), or semiconductor media (for example, Solid State Disk (SSD)), etc.
Claims (10)
- A data processing method, applied to a mobile terminal, comprising the following steps: acquiring at least one piece of application information of the mobile terminal, and determining at least one target scene according to the application information; determining target information according to the target scene, and generating or determining a display target resource according to the target information.
- The method according to claim 1, wherein: when one piece of application information corresponds to at least one target scene, displaying the target resource comprises displaying the target resource corresponding to each of the target scenes; or, when at least one piece of application information corresponds to a same target scene, displaying the target resource comprises displaying the target resource corresponding to the target scene; or, when at least one piece of application information corresponds to at least one target scene, displaying the target resource comprises displaying at least one target resource.
- The method according to claim 2, wherein the target resource comprises a folder, and displaying the target resource comprises displaying one folder, and/or displaying at least one parallel folder, and/or displaying a parent folder and a child folder.
- The method according to any one of claims 1 to 3, wherein the target information is changeable information, and when a change in the target information is detected, the displayed target resource is changed according to the changed target information.
- The method according to any one of claims 1 to 3, wherein, when a change in the application information is detected, the displayed target resource is changed according to the changed application information, or the displayed target resource is left unchanged.
- The method according to any one of claims 1 to 3, wherein the step of determining target information according to the target scene comprises: estimating demand parameters according to the target scene, and determining target information matching the demand parameters.
- The method according to any one of claims 1 to 3, wherein the step of generating or determining a display target resource according to the target information comprises: converting the target information into a resource query condition; retrieving resource information in the mobile terminal according to the resource query condition, and generating or determining the display target resource using the retrieved resource information.
- The method according to any one of claims 1 to 3, wherein, after the step of generating or determining a display target resource according to the target information, the method comprises: acquiring feedback information, and determining a preset level of the target resource according to the acquired feedback information; when it is detected that the preset level of the target resource is lower than or equal to a preset threshold, adjusting the target information so as to adjust the target resource.
- A mobile terminal, comprising a memory and a processor, wherein a data processing program is stored in the memory, and when the data processing program is executed by the processor, the steps of the data processing method according to any one of claims 1 to 8 are implemented.
- A readable storage medium, on which a computer program is stored, wherein, when the computer program is executed by a processor, the steps of the data processing method according to any one of claims 1 to 8 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110887003.1A CN113608808A (zh) | 2021-08-03 | 2021-08-03 | 数据处理方法、移动终端及存储介质 |
CN202110887003.1 | 2021-08-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023010705A1 true WO2023010705A1 (zh) | 2023-02-09 |
Family
ID=78339319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/129675 WO2023010705A1 (zh) | 2021-08-03 | 2021-11-10 | 数据处理方法、移动终端及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113608808A (zh) |
WO (1) | WO2023010705A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113608808A (zh) * | 2021-08-03 | 2021-11-05 | 上海传英信息技术有限公司 | 数据处理方法、移动终端及存储介质 |
CN114037418A (zh) * | 2021-11-08 | 2022-02-11 | 深圳传音控股股份有限公司 | 时钟管理方法、终端设备及存储介质 |
CN114691278A (zh) * | 2022-06-01 | 2022-07-01 | 深圳传音控股股份有限公司 | 应用程序处理方法、智能终端及存储介质 |
CN118115629A (zh) * | 2024-01-31 | 2024-05-31 | 北京百度网讯科技有限公司 | 宠物表情包和宠物模型的生成方法、装置、设备和介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107911445A (zh) * | 2017-11-14 | 2018-04-13 | 维沃移动通信有限公司 | 一种消息推送方法、移动终端和存储介质 |
CN109117233A (zh) * | 2018-08-22 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | 用于处理信息的方法和装置 |
CN111414900A (zh) * | 2020-04-30 | 2020-07-14 | Oppo广东移动通信有限公司 | 场景识别方法、场景识别装置、终端设备及可读存储介质 |
US20200272518A1 (en) * | 2017-12-06 | 2020-08-27 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for Resource Allocation and Related Products |
CN113608808A (zh) * | 2021-08-03 | 2021-11-05 | 上海传英信息技术有限公司 | 数据处理方法、移动终端及存储介质 |
-
2021
- 2021-08-03 CN CN202110887003.1A patent/CN113608808A/zh active Pending
- 2021-11-10 WO PCT/CN2021/129675 patent/WO2023010705A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN113608808A (zh) | 2021-11-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21952568 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21952568 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/01/2025) |