WO2020103404A1 - Business processing method and related apparatus - Google Patents

Business processing method and related apparatus

Info

Publication number
WO2020103404A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
terminal device
sensor
scenario
processed
Application number
PCT/CN2019/086127
Other languages
English (en)
French (fr)
Inventor
蒋晗
任超
钱良芳
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Priority to KR1020217002422A (published as KR20210022740A)
Priority to AU2019385776A (published as AU2019385776B2)
Priority to EP19874765.1A (published as EP3690678A4)
Priority to CA3105663A (published as CA3105663C)
Priority to JP2021506473A (published as JP7186857B2)
Publication of WO2020103404A1
Priority to US16/992,427 (published as US20200372250A1)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a business processing method and related apparatus.
  • Terminal devices, typified by smartphones, play an ever larger role in people's lives. In daily life, a user can scan a picture carrying a two-dimensional code (QR code) with a smartphone to invoke the functions of related applications or to obtain information.
  • However, scanning pictures carrying a two-dimensional code in this way involves complicated operations and little automation, which reduces user convenience.
  • Embodiments of the present application provide a business processing method and related apparatus, which are applied to a terminal device.
  • The terminal device can obtain to-be-processed data through a sensor in the terminal device; the scenario recognition model in the terminal device determines the current scenario from the to-be-processed data, and the corresponding business processing method is determined according to that scenario. Because the business processing method is preset in the terminal device, this simplifies the user's operation steps, makes operation more intelligent, and improves user convenience.
  • In a first aspect, an embodiment of the present application provides a business processing method applied to a terminal device, including: acquiring to-be-processed data, where the to-be-processed data is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the to-be-processed data includes at least to-be-processed image data generated from the image data collected by the infrared image sensor; determining, through a scenario recognition model, the target scenario corresponding to the to-be-processed data, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set; and determining the business processing method according to the target scenario.
  • That is, the terminal device collects data through a sensor deployed inside the terminal device or connected to it. The sensor includes at least an infrared image sensor, and to-be-processed data is generated from the collected data; the to-be-processed data includes at least to-be-processed image data generated from the image data collected by the infrared image sensor.
  • After the terminal device obtains the to-be-processed data, it can determine the target scenario corresponding to that data through the scenario recognition model. The scenario recognition model is obtained by offline training on the data set collected by the sensor together with the scenario types corresponding to the different data; offline training means performing model design and training on a deep learning framework. After the terminal device determines the current target scenario, it can determine the corresponding business processing method according to that scenario (a rough flow sketch appears below).
  • In this way the current target scenario of the terminal device can be determined, and the corresponding business processing method determined from it.
  • The above infrared image sensor is always on. As technology develops, the image sensor in this application need not be an infrared sensor, as long as images can be collected; among currently known sensors, however, the infrared sensor has the lowest power consumption.
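  • As a rough, non-authoritative illustration of this three-step flow (acquire to-be-processed data, recognize the target scenario, choose the preset business processing method), the following Python sketch uses stand-in functions; none of the names come from the patent:

```python
# Illustrative sketch of the claimed flow; every function here is a
# hypothetical stand-in, not an API from the patent.
def acquire_to_be_processed_data(sensors):
    # read each sensor and produce its "to-be-processed" data
    return {name: read() for name, read in sensors.items()}

def recognize_target_scenario(model, data):
    # the scenario recognition model maps sensor data to a scenario label
    return model(data)

def determine_business_processing(scenario, preset_table):
    # the business processing method is preset per scenario
    return preset_table.get(scenario, "no action")

sensors = {"infrared_image": lambda: "ir-frame"}       # stand-in reader
model = lambda data: "scan_qr_code"                    # stand-in classifier
preset_table = {"scan_qr_code": "start main image sensor and QR app"}

data = acquire_to_be_processed_data(sensors)
scenario = recognize_target_scenario(model, data)
print(determine_business_processing(scenario, preset_table))
```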
  • Optionally, determining the target scenario corresponding to the to-be-processed data through the scenario recognition model includes: determining the target scenario through an AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm and runs on an AI processor.
  • Specifically, the terminal device uses the AI algorithm in the scenario recognition model to determine the target scenario corresponding to the to-be-processed data. The AI algorithm contains a deep learning algorithm and runs on the AI processor in the terminal device, which has powerful parallel computing capability and is highly efficient when running AI algorithms; for this reason the scenario recognition model uses the AI algorithm to determine the specific target scenario. Running the AI algorithm on the AI processor in the terminal device improves the efficiency of scenario recognition and further improves user convenience; an on-device inference sketch follows.
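  • For concreteness, here is a minimal sketch of running such a deep-learning scenario classifier with TensorFlow Lite; the model file name is an assumption, and on a real device an AI-processor delegate (for example NNAPI on Android) would be attached so the network runs on the NPU rather than the CPU:

```python
# Minimal on-device inference sketch; "scene_recognition.tflite" is an
# assumed, pre-trained model file, not an artifact of the patent.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="scene_recognition.tflite")
# On a real terminal device an AI-processor delegate would be passed here
# so the deep learning algorithm runs on the AI processor.
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=np.float32)  # stand-in for IR image data
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])     # per-scenario scores
print("target scenario index:", int(np.argmax(scores)))
```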
  • Optionally, the sensor further includes at least one of an audio collector and a first sub-sensor, and the to-be-processed data includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from the audio data collected by the audio collector, and the first to-be-processed sub-data is generated from the first sub-sensor data collected by the first sub-sensor.
  • In addition to the infrared image sensor, the sensors deployed in the terminal device include at least one of an audio collector and a first sub-sensor. The first sub-sensor may be an acceleration sensor, a gyroscope, an ambient light sensor, or a proximity sensor. The audio collector collects audio data, which the terminal device processes to generate to-be-processed audio data; the first sub-sensor collects first sub-sensor data, which the terminal device processes to generate first to-be-processed sub-data. The terminal device thus uses multiple sensors to collect data in multiple dimensions, which improves the accuracy of scenario recognition.
  • Optionally, acquiring the to-be-processed data includes: when the preset operating time for image collection is reached, acquiring image data through the infrared image sensor, where the image data is the data collected by the infrared image sensor, and acquiring the to-be-processed image data through an image signal processor, where the to-be-processed image data is generated by the image signal processor from the image data; and/or, when the preset operating time for audio collection is reached, acquiring audio data through the audio collector, and acquiring the to-be-processed audio data through an audio signal processor, where the to-be-processed audio data is generated by the audio signal processor from the audio data; and/or, when the first preset operating time is reached, acquiring first sub-sensor data through the first sub-sensor, where the first sub-sensor data is the data collected by the first sub-sensor, and acquiring the first to-be-processed sub-data through a first sub-sensor processor, where the first to-be-processed sub-data is generated by the first sub-sensor processor from the first sub-sensor data.
  • That is, after its preset operating time is reached, each of the infrared image sensor, the audio collector, and the first sub-sensor collects its corresponding raw sensor data, and the terminal device uses the processor corresponding to each sensor to process the raw data into to-be-processed sensor data. Starting the sensors to collect data on a timed basis, with the raw data preprocessed by each sensor's own processor, reduces the cache space occupied by the scenario recognition model, reduces its power consumption, and extends the standby time of the terminal device; a timed-acquisition sketch follows.
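  • A minimal sketch of this timed, per-sensor acquisition loop; the intervals and the read/preprocess callables are assumptions for illustration:

```python
# Timed acquisition sketch: each sensor is read only when its preset
# operating time has elapsed, then handed to its signal processor.
import time

INTERVALS = {                 # seconds between collections (illustrative)
    "infrared_image": 1.0,    # e.g. 1 frame per second
    "audio": 0.5,
    "acceleration": 0.1,      # e.g. every 100 ms
}

def collect(name):
    """Stand-in for reading raw data from the named sensor."""
    return f"raw {name} sample"

def preprocess(name, raw):
    """Stand-in for the sensor's processor (miniISP, ASP, ...)."""
    return f"to-be-processed {raw}"

last_run = {name: 0.0 for name in INTERVALS}
cache = {}                                # buffer area for pending data
end = time.monotonic() + 3.0              # run the demo for 3 seconds
while time.monotonic() < end:
    now = time.monotonic()
    for name, interval in INTERVALS.items():
        if now - last_run[name] >= interval:   # preset time reached
            cache[name] = preprocess(name, collect(name))
            last_run[name] = now
    time.sleep(0.01)
print(cache)
```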
  • Optionally, determining the business processing method according to the target scenario includes: if the target scenario is a scan-QR-code scenario, determining, according to that scenario, that the business processing method is to start the main image sensor of the terminal device and/or start an application supporting the QR-code scanning function in the terminal device.
  • That is, when the terminal device determines from the data collected by one or more of its sensors that the target scenario is the scan-QR-code scenario, the corresponding business processing method includes starting the main image sensor, with which the terminal device can scan the QR code; the terminal device can also start an application that supports the scanning function, for example starting WeChat and opening its scan-QR-code feature. The main image sensor and the supporting application may be started at the same time, or either may be started according to a preset command or upon receiving a user instruction; this is not limited here.
  • Using the data collected by multi-dimensional sensors and determining through the scenario recognition model that the target scenario is the scan-QR-code scenario, the terminal device can automatically execute the related business processing method, which makes the device more intelligent and the user's operation more convenient.
  • Optionally, determining the business processing method according to the target scenario includes: if the target scenario is a meeting scenario, determining, according to that scenario, that the business processing method is to activate the mute mode of the terminal device and/or start the mute function of an application in the terminal device and/or display a mute mode icon in the standby always-on display area of the terminal device's screen, where the mute mode icon is used to activate the mute mode.
  • That is, when the terminal device determines from the data collected by one or more of its sensors that the target scenario is the meeting scenario, the corresponding business processing method includes activating the device's mute mode; while the device is in mute mode, all applications running on it are silent. The terminal device can also start the mute function of an application running on it, for example WeChat's mute function, in which case WeChat's notification sound is silenced; and it can display a mute mode icon in the standby always-on display area of the screen, through which the device can receive the user's mute instruction and activate mute mode in response to that instruction.
  • Using the data collected by multi-dimensional sensors and determining through the scenario recognition model that the target scenario is the meeting scenario, the terminal device can automatically execute the related business processing method, which makes the device more intelligent and the user's operation more convenient.
  • Optionally, determining the business processing method according to the target scenario includes: if the target scenario is a sports scenario, determining, according to that scenario, that the business processing method is to start the sport mode of the terminal device and/or start the sport mode function of an application in the terminal device and/or display a music playback icon in the standby always-on display area of the terminal device's screen, where the sport mode of the terminal device includes a step-counting function and the music playback icon is used to start or pause music playback.
  • That is, when the terminal device determines from the data collected by one or more of its sensors that the target scenario is the sports scenario, the corresponding business processing method includes starting the device's sport mode: the terminal device starts the pedometer application and the physiological data monitoring application, recording the user's steps and related physiological data with the relevant sensors in the terminal device. The terminal device can also start the sport mode function of an application, for example the sport function of NetEase Cloud Music, in which case NetEase Cloud Music plays in sport mode; and it can display a music playback icon in the standby always-on display area of the screen, through which the device can receive the user's playback instruction and start or pause music in response.
  • Using the data collected by multi-dimensional sensors and determining through the scenario recognition model that the target scenario is the sports scenario, the terminal device can automatically execute the related business processing method, which makes the device more intelligent and the user's operation more convenient.
  • Optionally, determining the business processing method according to the target scenario includes: if the target scenario is a driving scenario, determining, according to that scenario, that the business processing method is to start the driving mode of the terminal device and/or start the driving mode function of an application in the terminal device and/or display a driving mode icon in the standby always-on display area of the terminal device's screen, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving mode icon is used to start the driving mode.
  • That is, when the terminal device determines from the data collected by one or more of its sensors that the target scenario is the driving scenario, the corresponding business processing method includes starting the device's driving mode: the terminal device starts a voice assistant, so it can perform operations according to the user's voice instructions, and it can also start a navigation function. The terminal device can also start the driving mode function of an application, for example the driving function of the Gaode map application, in which case that application's navigation runs in driving mode; and it can display a driving mode icon in the standby always-on display area of the screen, through which the device can receive the user's driving mode instruction and start driving mode in response.
  • Using the data collected by multi-dimensional sensors and determining through the scenario recognition model that the target scenario is the driving scenario, the terminal device can automatically execute the related business processing method, which makes the device more intelligent and the user's operation more convenient. A dispatch-table sketch of these scenario-to-action mappings follows.
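  • Taken together, the four optional branches above amount to a scenario-to-action dispatch table. A hedged Python sketch (the action functions are hypothetical placeholders, not platform APIs):

```python
# Dispatch sketch: map each recognized target scenario to its preset
# business processing actions. All actions are illustrative stubs.
def start_main_image_sensor():  print("main image sensor started")
def open_qr_scanning_app():     print("QR-scanning application started")
def activate_mute_mode():       print("mute mode activated")
def start_sport_mode():         print("sport mode started (step counting)")
def start_driving_mode():       print("driving mode started (navigation, voice assistant)")

BUSINESS_PROCESSING = {
    "scan_qr_code": [start_main_image_sensor, open_qr_scanning_app],
    "meeting":      [activate_mute_mode],
    "sports":       [start_sport_mode],
    "driving":      [start_driving_mode],
}

def handle(target_scenario):
    for action in BUSINESS_PROCESSING.get(target_scenario, []):
        action()

handle("meeting")   # -> mute mode activated
```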
  • In a second aspect, an embodiment of the present application provides a terminal device, including a sensor and a processor, where the sensor includes at least an infrared image sensor. The processor is configured to obtain to-be-processed data, where the to-be-processed data is generated from the data collected by the sensor and includes at least to-be-processed image data generated from the image data collected by the infrared image sensor; the processor is further configured to determine, through a scenario recognition model, the target scenario corresponding to the to-be-processed data, where the scenario recognition model is obtained by training on the sensor data set acquired by the sensor and the scenario type set; and the processor is further configured to determine a business processing method according to the target scenario. The processor is also configured to execute the business processing method described in the first aspect.
  • In a third aspect, an embodiment of the present application provides a business processing apparatus. The business processing apparatus is applied to a terminal device and includes: an acquiring unit, configured to acquire to-be-processed data, where the to-be-processed data is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the to-be-processed data includes at least to-be-processed image data generated from the image data collected by the infrared image sensor; and a determining unit, configured to determine, through a scenario recognition model, the target scenario corresponding to the to-be-processed data, where the scenario recognition model is trained on the sensor data set and the scenario type set. The determining unit is further configured to determine the business processing method according to the target scenario.
  • In a possible implementation of the third aspect, the determining unit is specifically configured to determine the target scenario corresponding to the to-be-processed data through an AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm and runs on an AI processor.
  • In a possible implementation of the third aspect, the sensor further includes at least one of an audio collector and a first sub-sensor, and the to-be-processed data includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from the audio data collected by the audio collector, and the first to-be-processed sub-data is generated from the first sub-sensor data collected by the first sub-sensor.
  • In a possible implementation of the third aspect, the acquiring unit is specifically configured to: acquire image data through the infrared image sensor when the preset operating time for image collection is reached, where the image data is the data collected by the infrared image sensor, and acquire the to-be-processed image data through an image signal processor, where the to-be-processed image data is generated by the image signal processor from the image data; and/or acquire audio data through the audio collector when the preset operating time for audio collection is reached, and acquire the to-be-processed audio data through an audio signal processor, where the to-be-processed audio data is generated by the audio signal processor from the audio data; and/or acquire first sub-sensor data through the first sub-sensor when the first preset operating time is reached, where the first sub-sensor data is the data collected by the first sub-sensor, and acquire the first to-be-processed sub-data through a first sub-sensor processor, where the first to-be-processed sub-data is generated by the first sub-sensor processor from the first sub-sensor data.
  • In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a scan-QR-code scenario, determine, according to that scenario, that the business processing method is to start the main image sensor of the terminal device and/or start an application supporting the QR-code scanning function in the terminal device.
  • In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a meeting scenario, determine, according to that scenario, that the business processing method is to activate the mute mode of the terminal device and/or start the mute function of an application in the terminal device and/or display a mute mode icon in the standby always-on display area of the terminal device's screen, where the mute mode icon is used to activate the mute mode.
  • In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a sports scenario, determine, according to that scenario, that the business processing method is to start the sport mode of the terminal device and/or start the sport mode function of an application in the terminal device and/or display a music playback icon in the standby always-on display area of the terminal device's screen, where the sport mode includes a step-counting function and the music playback icon is used to start or pause music playback.
  • In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a driving scenario, determine, according to that scenario, that the business processing method is to start the driving mode of the terminal device and/or start the driving mode function of an application in the terminal device and/or display a driving mode icon in the standby always-on display area of the terminal device's screen, where the driving mode includes a navigation function and a voice assistant, and the driving mode icon is used to start the driving mode.
  • An embodiment of the present application further provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the business processing method described in the first aspect.
  • An embodiment of the present application further provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the business processing method described in the first aspect.
  • The present application further provides a chip system, including a processor configured to support a network device in implementing the functions involved in the above aspects, for example sending or processing the data and/or information involved in the above methods.
  • In a possible design, the chip system further includes a memory, configured to store the program instructions and data necessary for the network device.
  • The chip system may consist of chips, or may include chips and other discrete devices.
  • The present application further provides a business processing method. The method is applied to a terminal device equipped with a normally-open (always-on) image sensor, and includes: acquiring data, where the data includes the image data collected by the image sensor; determining the target scenario corresponding to the data through a scenario recognition model, where the scenario recognition model is obtained by training on the sensor data set and the scenario type set; and determining the business processing method according to the target scenario.
  • The present application further provides a terminal device configured with a normally-open image sensor, where the terminal device is used to implement the method described in any of the foregoing implementations.
  • As can be seen from the above technical solutions, the embodiments of the present application have the following advantage: the terminal device can obtain to-be-processed data through its sensors, the scenario recognition model in the terminal device determines the current scenario from that data, and the corresponding business processing method is determined according to the current scenario. Because the business processing method is preset in the terminal device, this simplifies the user's operation steps, makes operation more intelligent, and improves user convenience.
  • For example, where the terminal device is a smartphone: when the smartphone's screen is off and a picture carrying a two-dimensional code needs to be scanned, the smartphone can automatically invoke the functions of the related applications or obtain the information without additional operations, improving user convenience.
  • FIG. 1a is a schematic diagram of a system architecture in an embodiment of the present application.
  • FIG. 1b is a schematic diagram of another system architecture in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a usage scenario involved in the business processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an embodiment of a service processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an embodiment of intelligently starting an application program provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an embodiment of an intelligent recommendation service provided by an embodiment of this application.
  • FIG. 6 is a schematic flowchart of an application scenario of a method for business processing in an embodiment of the present application
  • FIG. 7 is a schematic structural diagram of a computer system provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an AI processor provided by an embodiment of this application.
  • FIG. 9 is a schematic diagram of an embodiment of a service processing device in an embodiment of the present application.
  • The present application provides a business processing method and related apparatus. The terminal device can obtain to-be-processed data through a sensor in the terminal device; the scenario recognition model in the terminal device determines the current scenario from the to-be-processed data and determines the corresponding business processing method according to that scenario. Since the business processing method is preset in the terminal device, this simplifies the user's operation steps, makes operation more intelligent, and improves user convenience.
  • A processor contains one or more computing units, also called cores; these cores constitute the processor. The cores in the embodiments of the present application are mainly heterogeneous cores, whose types include but are not limited to the following:
  • CPU (central processing unit): a very-large-scale integrated circuit that is the computing core and control unit of a computer. Its main function is to interpret computer instructions and to process the data in computer software.
  • GPU (graphics processing unit): also known as a display core, visual processor, or display chip; a microprocessor dedicated to image computation.
  • DSP (digital signal processor): a chip that implements digital signal processing technology. The DSP chip uses a Harvard architecture in which program and data are separate; it has a dedicated hardware multiplier, makes wide use of pipelined operation, and provides special DSP instructions that can quickly implement various digital signal processing algorithms.
  • ISP (image signal processor): its main function is to post-process the data output by the image sensor, including linearity correction, noise removal, dead pixel correction, interpolation, white balance, and automatic exposure.
  • ASP (audio signal processor): a chip, itself a kind of DSP, that performs audio signal processing computation. Its main function is to post-process the data output by the audio collector, including sound source localization, sound source enhancement, echo cancellation, and noise suppression.
  • AI processor: AI processors, also known as artificial intelligence processors or AI accelerators, are processing chips that run artificial intelligence algorithms. They are usually implemented with application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs), and may also be implemented with GPUs; this is not limited here. An AI processor typically uses a systolic array structure: in this array structure, data "flows" between the processing units of the array at a predetermined pipelined rhythm, and as the data flows, all processing units process the data passing through them in parallel, which yields a high parallel processing speed. A toy simulation of this flow is sketched below.
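  • Purely as an illustration of that pipelined flow (an output-stationary systolic matrix multiply; nothing here is taken from a real AI processor):

```python
# Toy systolic-array simulation: inputs are skewed in time, each
# processing element (PE) multiplies the operand pair flowing through it
# and accumulates, and all PEs work in parallel on every cycle.
import numpy as np

def systolic_matmul(A, B):
    n, k = A.shape
    m = B.shape[1]
    C = np.zeros((n, m))
    cycles = n + m + k - 2                 # pipeline depth
    for t in range(cycles):
        # at cycle t, PE (i, j) sees A[i, t-i-j] and B[t-i-j, j]
        for i in range(n):
            for j in range(m):
                s = t - i - j
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A, B = np.random.rand(3, 4), np.random.rand(4, 2)
print(np.allclose(systolic_matmul(A, B), A @ B))   # True
```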
  • Specifically, the AI processor may be a neural-network processing unit (NPU), a tensor processing unit (TPU), an intelligence processing unit (IPU), or a GPU.
  • NPU (neural-network processing unit): an NPU simulates human neurons and synapses at the circuit level and uses a deep learning instruction set to process large numbers of neurons and synapses directly, with one instruction completing the processing of a group of neurons. Compared with the CPU's separation of storage and computation, the NPU integrates storage and computation through its synaptic weights, which greatly improves operating efficiency.
  • Various sensors are provided on the terminal device, through which the terminal device obtains external information. The sensors involved in the embodiments of the present application include but are not limited to the following types:
  • IR-RGB image sensor: an infrared-plus-RGB image sensor built from CCD (charge-coupled device) units or standard CMOS (complementary metal-oxide-semiconductor) units. Its filter passes only light in the visible color bands and in a set infrared band; the image signal processor then separates the IR (infrared radiation) image data stream from the RGB (red, green, blue primary color) image data stream. The IR image data stream is the image stream obtained in a low-light environment, and the two separated image data streams are used by other applications for processing; a channel-splitting sketch follows.
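  • Assuming, for illustration only, that the sensor delivers a four-channel (R, G, B, IR) frame, the separation of the two streams can be sketched as:

```python
# Sketch of separating the IR stream from the RGB stream; the 4-channel
# frame layout is an assumption, not the sensor's documented format.
import numpy as np

frame = np.random.randint(0, 256, size=(480, 640, 4), dtype=np.uint8)
rgb_stream = frame[:, :, :3]   # color image for normal lighting
ir_stream = frame[:, :, 3]     # grayscale image usable in low light
print(rgb_stream.shape, ir_stream.shape)   # (480, 640, 3) (480, 640)
```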
  • Acceleration sensor: used to measure the change in an object's acceleration, usually along the three directions X, Y, and Z, where the X value represents horizontal movement of the terminal device, the Y value represents vertical movement, and the Z value represents movement along the spatial vertical.
  • In addition, it is used to measure the movement speed and direction of the terminal device. For example, when the user walks holding the terminal device, the device swings up and down, so an acceleration change back and forth along a certain direction can be detected; by counting these back-and-forth changes, the number of steps can be calculated, as in the sketch below.
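  • A toy sketch of that step-counting idea (threshold crossing on the acceleration magnitude; the threshold and the synthetic signal are assumptions):

```python
# Count a step whenever the acceleration magnitude crosses a threshold
# on the way up; a simple stand-in for a real pedometer algorithm.
import numpy as np

def count_steps(magnitude, threshold=10.5):
    """magnitude: 1-D array of |acceleration| samples (m/s^2)."""
    above = magnitude > threshold
    # a rising edge (False -> True) is counted as one step
    return int(np.sum(~above[:-1] & above[1:]))

t = np.linspace(0, 10, 1000)                     # 10 s at 100 Hz
walk = 9.8 + 2.0 * np.sin(2 * np.pi * 2.0 * t)   # ~2 steps per second
print(count_steps(walk))                          # -> 20
```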
  • Gyroscope: a sensor that measures the angular velocity of an object around a rotation axis. The gyroscope used in a terminal device is a micro-electro-mechanical systems (MEMS) gyroscope; the common MEMS gyroscope chip is a three-axis gyroscope chip, which can track displacement changes in six directions. The three-axis gyroscope chip obtains the angular acceleration change values of the terminal device in the x, y, and z directions and is used to detect the rotation direction of the terminal device.
  • Ambient light sensor: a sensor that measures changes in outside light intensity, based on the photoelectric effect. In terminal equipment it is used to adjust the brightness of the display screen; and because the display screen is usually the most power-consuming part of the terminal device, using the ambient light sensor to help adjust screen brightness further extends battery life.
  • Proximity sensor: a proximity light sensor consists of an infrared emitting lamp and an infrared radiation detector, and is located near the earpiece of the terminal device. When the device is brought close to the ear, the system uses the proximity sensor to recognize that the user is on a call and turns off the display screen, preventing mis-operation from affecting the call. Its working principle is that the invisible infrared light emitted by the lamp, generally in the near-infrared band, is reflected by nearby objects and then detected by the infrared radiation detector.
  • Geomagnetic sensor: a measuring device that exploits the fact that the magnetic flux distribution of the geomagnetic field differs in different directions, so that the attitude, motion angle, and similar information of the measured object can be obtained by sensing changes in the geomagnetic field distribution. It is generally used in the compass or navigation applications of a terminal device, helping the user achieve accurate positioning by computing the device's specific orientation in three-dimensional space.
  • The business processing method provided in the embodiments of the present application can be applied to a terminal device. The terminal device may be a mobile phone, a tablet personal computer, a laptop computer, a digital camera, a personal digital assistant (PDA), a navigation device, a mobile internet device (MID), a wearable device, a smart watch, a smart bracelet, and so on.
  • The operating system carried on the terminal device may be any operating system; this embodiment of the present application places no limitation on it.
  • FIG. 1a is a schematic diagram of a system architecture in an embodiment of the present application.
  • the terminal device can be logically divided into a hardware layer, an operating system, and an application layer.
  • the hardware layer includes hardware resources such as a main processor, a microcontroller unit, a modem, a Wi-Fi module, a sensor, and a positioning module.
  • The application layer includes one or more applications, which can be of any type, such as social applications, e-commerce applications, browsers, multimedia applications, and navigation applications, as well as the scenario recognition model, artificial intelligence algorithms, and similar applications.
  • The operating system is a computer program that manages and controls the hardware and software resources.
  • The hardware layer, in addition to hardware resources such as the main processor, sensors, and Wi-Fi module, also includes an always-on (AO) area. The hardware in the always-on area is usually powered around the clock and includes a sensor control center (sensor hub), an AI processor, sensors, and other hardware resources. The sensor hub contains a coprocessor and sensor processors; a sensor processor processes the data output by a sensor, the data generated by the AI processor and the sensor processors is further processed by the coprocessor, and the coprocessor interacts with the main processor.
  • The sensors in the always-on area include an infrared image sensor, a gyroscope, an acceleration sensor, an audio collector (microphone), and the like; the sensor processors include a mini image signal processor (miniISP) and an audio signal processor (ASP).
  • FIG. 1b is a schematic diagram of another system architecture in an embodiment of the present application.
  • the operating system includes a kernel, a hardware abstraction layer (HAL), a library and a runtime, and a framework.
  • The kernel is used to provide low-level system components and services, such as power management, memory management, thread management, and hardware drivers; the hardware drivers include Wi-Fi drivers, sensor drivers, positioning module drivers, and so on.
  • the hardware abstraction layer encapsulates the kernel driver, provides an interface to the framework, and shields low-level implementation details.
  • the hardware abstraction layer runs in user space, while the kernel driver runs in kernel space.
  • the library and runtime are also called runtime libraries, which provide the library files and execution environment required by the executable program at runtime.
  • Libraries and runtimes include Android runtime (ART) and libraries.
  • ART is a virtual machine or virtual machine instance that can convert the bytecode of an application into machine code.
  • the library is a program library that provides support for executable programs at runtime, including browser engines (such as webkit), script execution engines (such as JavaScript engines), and graphics processing engines.
  • the framework is used to provide various basic common components and services for applications in the application layer, such as window management, location management, and so on.
  • the framework may include a phone manager, resource manager, location manager, etc.
  • each component of the operating system described above can be implemented by the main processor executing a program stored in the memory.
  • In addition, the terminal may include fewer or more components than those shown in FIG. 1a and FIG. 1b; the terminal devices shown in FIG. 1a and FIG. 1b include only the components more relevant to the implementations disclosed in the embodiments of the present application.
  • FIG. 2 is a schematic diagram of a usage scenario involved in the service processing method provided by an embodiment of the present application.
  • a processor is provided on the terminal device, and the processor includes at least two cores.
  • the at least two cores may include CPU and AI processor.
  • AI processors include but are not limited to neural network processors, tensor processors, and GPUs. These chips may be called cores and are used to perform computation on the terminal device; different cores have different energy efficiency ratios.
  • the terminal device can use specific algorithms to perform different application services.
  • The method of the embodiments of the present application involves running a scenario recognition model: the terminal device uses the scenario recognition model to determine the target scenario in which the user currently using the terminal device is located, and performs different business processing according to the determined target scenario. When determining that target scenario, the terminal device relies on the data collected by the different sensors together with the AI algorithm in the scenario recognition model.
  • Based on the above, the embodiments of the present application provide a business processing method. The following embodiments mainly determine the target scenario in which the terminal device is located from the data collected by the different sensors together with the scenario recognition model, and the business processing corresponding to that target scenario.
  • FIG. 3 is a schematic diagram of an embodiment of a service processing method provided by an embodiment of the present application.
  • The business processing method includes the following steps.
  • First, the terminal device starts the timers connected to the sensors; a timer indicates the time interval at which the sensor connected to it collects data. The coprocessor in the AO area sets the timing of the timers corresponding to the different sensors according to the requirements of the scenario recognition model. For example, the timer corresponding to the acceleration sensor may be set to 100 milliseconds (ms), meaning acceleration data is collected every 100 ms and stored in the buffer area specified by the terminal device. The timing here can be set according to the requirements of the scenario recognition model, and also according to requirements such as sensor lifespan, cache space occupancy, and power consumption.
  • For example, the infrared image sensor itself can collect infrared images at a relatively high frame rate, but long-term continuous collection damages the sensor itself and shortens its lifespan; it also increases the power consumption of the infrared image sensor and reduces the usage time of the terminal device. Therefore the timing of the timer connected to the infrared image sensor can be set appropriately. For example, in a face recognition scenario the image collection interval can be set to 1/6 of a second, that is, 6 frames collected per second; in other recognition scenarios the interval can be set to 1 second, that is, 1 frame per second. It can also be arranged that, when the terminal device is in low-battery mode, the interval is set to 1 second, extending the usage time of the terminal device. For some sensors with low power consumption whose collected data occupies little storage space, no timing need be set, so that data is collected in real time.
  • The timer may be a chip with a timing function connected to the sensor, or a timing function built into the sensor; this is not limited here.
  • After a timer reaches its set time, it instructs the connected sensor to start and collect data.
  • Which sensors need to collect data is selected by the coprocessor based on the scenario recognition model. For example, when it is necessary to determine whether the device is currently in a QR-code-scanning scenario, the terminal device acquires data through the infrared image sensor, and the scenario recognition process can be completed after the data collected by the infrared image sensor is processed and computed. For scenarios whose recognition also requires sound, the terminal device additionally uses the audio collector to collect data, and the scenario recognition process can be completed after the data collected by the infrared image sensor and the audio collector is processed and computed.
  • The infrared image sensor collects image data that includes an IR image and an RGB image: the IR image is a grayscale image that can be used to show a low-light scene, and the RGB image is a color image. The infrared image sensor stores the collected image data in the buffer space for use in subsequent steps.
  • There are two application scenarios for arranging the infrared image sensor. In the first, a first infrared image sensor is arranged in the terminal device in the same plane as the main screen of the terminal device; in the second, a second infrared image sensor is arranged in the terminal device in the same plane as the main image sensor of the terminal device. The two cases are introduced below.
  • The first infrared image sensor can collect image data projected onto the main screen side of the terminal device. For example, when the user takes a selfie with the terminal device, the first infrared image sensor, lying in the same plane as the main screen, can collect image data of the user's face.
  • The second infrared image sensor can collect image data projected onto the main image sensor of the terminal device. For example, when the user scans a two-dimensional code with the main image sensor of the terminal device, the second infrared image sensor, lying in the same plane as the main image sensor, can collect the two-dimensional code image data.
  • In practical applications, the first infrared image sensor and the second infrared image sensor may both be arranged at the same time; the arrangement and the data collection are similar to the above and are not repeated here.
  • The audio collector can be arranged at any position on the casing of the terminal device, and collects the audio data of the environment in which the terminal device is located at a sampling frequency of 16 kHz; a capture sketch follows.
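  • A small capture sketch at that sampling rate, using the third-party sounddevice package as one assumed way to record audio in Python:

```python
# Record one second of mono audio at 16 kHz; sounddevice is an assumed
# capture API, any platform audio interface would do.
import sounddevice as sd

FS = 16000                       # 16 kHz sampling frequency
duration = 1.0                   # seconds
audio = sd.rec(int(FS * duration), samplerate=FS, channels=1, dtype="int16")
sd.wait()                        # block until the buffer is filled
print(audio.shape)               # (16000, 1) to-be-processed audio samples
```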
  • The acceleration sensor is arranged in the always-on area inside the terminal equipment and is connected to the sensor hub through a two-wire serial bus interface (inter-integrated circuit, I2C) or a serial peripheral interface (SPI). It generally provides an acceleration measurement range of ±2 g to ±16 g, and the precision of the collected acceleration data is less than 16 bits; a read sketch follows.
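  • A hedged sketch of reading one acceleration sample over I2C with the smbus2 package; the bus number, device address, register map, and scale factor are hypothetical and depend on the actual accelerometer part:

```python
# Read six bytes (X, Y, Z as signed 16-bit values) from a hypothetical
# accelerometer over I2C; register addresses are illustrative only.
from smbus2 import SMBus

I2C_BUS = 1
ACCEL_ADDR = 0x68        # hypothetical 7-bit device address
ACCEL_XOUT = 0x3B        # hypothetical first data register (X high byte)

def to_int16(hi, lo):
    v = (hi << 8) | lo
    return v - 65536 if v & 0x8000 else v

with SMBus(I2C_BUS) as bus:
    raw = bus.read_i2c_block_data(ACCEL_ADDR, ACCEL_XOUT, 6)
    x, y, z = (to_int16(raw[i], raw[i + 1]) for i in (0, 2, 4))
    # with a +/-2 g range, 16-bit data scales at 16384 LSB per g
    print([v / 16384.0 for v in (x, y, z)])
```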
  • The data collected by a sensor can be sent directly to the sensor processor or to the scenario recognition model for processing, or it can be stored in the cache area, with the sensor processor or the scenario recognition model reading the sensor data from the cache for processing; this is not limited here.
  • Next, the sensor processor processes the data: the sensor processor corresponding to a sensor, also called that sensor's digital signal processor, preprocesses the collected data to generate the to-be-processed data used by the subsequent scenario recognition model.
  • For example, after acquiring the image data collected by the infrared image sensor, the miniISP processes it: when the resolution of the collected image data is 640 by 480 pixels, the miniISP can compress it to generate 320-by-240-pixel to-be-processed image data, and the miniISP can also perform automatic exposure (AE) processing on the image data. In addition, the miniISP can automatically select which image in the image data to process according to the brightness information the data contains: when the miniISP determines that the current image was acquired in a low-light environment, it selects the IR image in the image data for processing, because in low light the IR image holds more detail than the RGB image. A preprocessing sketch in this spirit follows.
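  • A miniISP-style preprocessing sketch under those assumptions; the darkness threshold and the naive 2x decimation are illustrative choices, not the patent's algorithm:

```python
# Downscale a 640x480 frame to 320x240 and pick the IR image when the
# scene is judged dark by mean brightness.
import numpy as np

def preprocess(rgb, ir, dark_threshold=40):
    """rgb: (480, 640, 3) uint8; ir: (480, 640) uint8."""
    if rgb.mean() < dark_threshold:          # low light: IR keeps more detail
        img = ir[:, :, None].repeat(3, axis=2)
    else:
        img = rgb
    return img[::2, ::2]                     # naive 2x decimation -> 320x240

rgb = np.random.randint(0, 30, (480, 640, 3), dtype=np.uint8)   # dark scene
ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
print(preprocess(rgb, ir).shape)             # (240, 320, 3)
```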
  • Step 303 is an optional step.
  • Then, the terminal device uses the data collected by the sensors and/or the to-be-processed data produced by the sensor processors to determine the corresponding target scenario according to the scenario recognition model. The scenario recognition model runs on the coprocessor and the AI processor, with the AI algorithm of the model running on the AI processor.
  • For different sensors, the direction and order of data flow in the scenario recognition model differ. For example, the to-be-processed image data generated by the miniISP from the image data, and the to-be-processed audio data generated by the ASP from the audio data, are first loaded into the AI algorithm running on the AI processor, after which the coprocessor determines the target scenario from the AI processor's computation result. The acceleration data collected and generated by the acceleration sensor is first processed by the coprocessor and then loaded into the AI algorithm running on the AI processor; finally, the coprocessor determines the target scenario from the AI processor's computation result.
  • The scenario recognition model consists of two parts. The first part is the AI algorithm: a neural network model trained offline on the data set collected by the sensors and the to-be-processed data set produced by the sensor processors. The second part determines the target scenario from the result of the AI algorithm's computation, and is completed by the coprocessor.
  • The neural network model may be a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a long short-term memory (LSTM) network, or the like.
  • A CNN is a feed-forward neural network whose artificial neurons respond to surrounding units within part of the coverage area; it performs excellently on large-scale image processing. A CNN consists of one or more convolutional layers with a fully connected layer at the top (corresponding to a classic neural network), and also includes associated weights and pooling layers; this structure enables the CNN to exploit the two-dimensional structure of the input data. The convolution kernels of the CNN's convolutional layers convolve the image: convolution scans the image with a filter of specific parameters to extract the image's feature values.
  • Offline training refers to model design and training on a deep learning framework such as tensorflow or caffe (convolutional architecture for fast feature embedding).
  • Take the infrared image sensor as an example. Several scenario recognition models in the terminal device can use infrared image data, for example a scan-QR-code scenario recognition model, a being-scanned scenario recognition model, and a selfie scenario recognition model; these scenario recognition models are introduced separately below.
  • For the QR-code-scanning scenario recognition model, the neural network model loaded in the AI processor is obtained by offline training with a CNN algorithm: 100,000 images containing two-dimensional codes and 100,000 images without two-dimensional codes are collected through the sensor and labeled accordingly (with QR code or without QR code); after training on TensorFlow, the neural network model and its related parameters are obtained. The image data collected by the second infrared image sensor is then input into the neural network model for network inference, which yields the result of whether the image contains a two-dimensional code.
  • The QR-code-scanning scenario recognition model can also identify whether the image acquired by the terminal device contains a barcode or other such results. The kind of offline training involved is sketched below.
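The offline training described above might look roughly as follows with TensorFlow. This is a minimal sketch only: the dataset directory layout, image size, network shape, and hyperparameters are assumptions, not values disclosed in this application.

```python
import tensorflow as tf

# Assumed layout: qr_dataset/{qr,no_qr}/*.png, i.e. the labeled images
# described above ("with QR code" / "without QR code").
train_ds = tf.keras.utils.image_dataset_from_directory(
    "qr_dataset/",
    labels="inferred",
    label_mode="binary",
    color_mode="grayscale",   # IR frames are single-channel
    image_size=(240, 320),    # matches the miniISP output size above
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(image contains a QR code)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("qr_scene_model.keras")  # later converted/quantized for the AI processor
```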
  • For the being-scanned scenario recognition model, the neural network model loaded in the AI processor is likewise obtained by offline training with a CNN algorithm: 100,000 images containing a code-scanning device and 100,000 images without one are collected through the sensor.
  • An image containing a code-scanning device is image data collected by the sensor that includes the scanning part of a device such as a code scanner, a scan gun, a smartphone, or a wearable device such as a smart band.
  • Taking the smartphone as an example, when the image contains the main-image-sensor part of a smartphone, the image is regarded as containing a code-scanning device.
  • For the selfie scenario recognition model, the neural network model obtained by offline training in the AI processor uses a CNN algorithm on 100,000 images containing human faces and 100,000 images without human faces, collected through sensors; an image containing a human face is an image containing part or all of a human face.
  • The images are labeled accordingly (with or without a human face), and after training on TensorFlow the neural network model and its related parameters are obtained.
  • The image data collected by the first infrared image sensor is then input into the neural network model for network inference, which yields whether the image contains a face.
  • audio data collected by the audio collector can also be used to determine the target scenario.
  • Image data, audio data, and acceleration data may be combined to determine whether the terminal device is currently in a sports scenario, and a variety of data may likewise be used to determine whether the terminal device is currently in a driving scenario.
  • The coprocessor may determine the business processing method corresponding to the target scenario, or it may send the determined target scenario to the main processor, and the main processor then determines the business processing method corresponding to the target scenario.
  • For example, if the target scenario is a driving scenario, the business processing method determined according to the driving scenario is to start the driving mode of the terminal device and/or start the driving-mode function of applications in the terminal device and/or display a driving-mode icon in the always-on display area of the terminal device's screen, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving-mode icon is used to start the driving mode.
  • Starting the driving mode of the terminal device and starting the driving-mode function of applications in the terminal device are steps performed by the main processor, while displaying a driving-mode icon in the always-on display area of the terminal device's screen is a step performed by the coprocessor, as the dispatch sketch below illustrates.
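One way to picture this division of work is a dispatch table from target scenario to actions, keyed by the processor that performs them. All names below are illustrative, not interfaces from this application.

```python
# Hypothetical dispatch from target scenario to business processing actions.
SCENARIO_ACTIONS = {
    "driving": {
        "main_processor": ["start_driving_mode", "start_app_driving_mode"],
        "coprocessor": ["show_aod_icon:driving_mode"],
    },
    "conference": {
        "main_processor": ["start_mute_mode"],
        "coprocessor": ["show_aod_icon:mute_mode"],
    },
}

def handle_scenario(scenario: str, screen_off: bool):
    actions = SCENARIO_ACTIONS.get(scenario)
    if actions is None:
        return []
    if screen_off:
        # Only the coprocessor's AOD actions run without waking the main processor.
        return actions["coprocessor"]
    return actions["coprocessor"] + actions["main_processor"]

print(handle_scenario("driving", screen_off=True))
```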
  • The method for business processing provided in this embodiment uses a variety of sensors, such as traditional sensors, an infrared image sensor, and an audio collector, to collect multi-dimensional external information, thereby improving the terminal device's perception capability.
  • Because the AI processor is a dedicated chip optimized for AI algorithms, using it allows the terminal device to greatly increase the running speed of the AI algorithm and to reduce power consumption. Because the coprocessor runs in the always-on (AO) area of the terminal device and does not need the main processor to be woken, scenario recognition can still be performed even when the terminal device's screen is off.
  • The following embodiments describe how the terminal device determines the target scenario, and the business processing method corresponding to it, in different scenarios.
  • FIG. 4 is a schematic diagram of an embodiment of intelligent application startup provided by an embodiment of the present application. The embodiment includes:
  • step 401 is similar to step 301 in FIG. 3 and will not be repeated here.
  • Step 402 (the sensor processor processes the data) is similar to step 302 in FIG. 3 and will not be repeated here.
  • step 403 is similar to step 303 in FIG. 3 and will not be repeated here.
  • In step 404, the method for determining whether the terminal device is in a target scenario based on the data collected by the sensor is similar to the method in step 304 in FIG. 3, and details are not described here.
  • If the terminal device determines that it is in the target scenario, step 405 is entered; if the terminal device determines from the currently acquired data that it is not in the target scenario, step 401 is entered, and the terminal device waits to acquire and process the next batch of sensor data.
  • In step 405, after the terminal device determines the target scenario it is currently in according to the data collected by the sensor, it can start the target application corresponding to the target scenario.
  • For example, when the terminal device determines that the current scenario is a sports scenario, it can start a navigation application such as Gaode Map, start a health monitoring application to monitor the user's physiological data, or start a music playback application and play music automatically.
  • When the terminal device detects that the current image contains a two-dimensional code, it can determine that it is currently in the QR-code-scanning scenario. At this time, the terminal device can start the main image sensor and launch the home screen with an application such as the camera application, or open an application with a QR-code-scanning function and further open that function within the application, for example the "scan" function in a browser application, where the "scan" function is used to scan the two-dimensional code image and provide the scanned data to the browser for use.
  • When the terminal device detects that the current image contains a code-scanning device, it can determine that it is currently in the being-scanned scenario. At this time, the terminal device can open an application program that carries a two-dimensional code and/or barcode: after the home screen of the terminal device is opened, the QR code and/or barcode of the application program is displayed on the home screen. For example, when the terminal device determines that the current image contains a code-scanning device, it opens the home screen of the terminal device and displays the payment QR code and/or barcode of a payment application, which may be Alipay or WeChat.
  • When the terminal device detects that the current image contains a human face, it can determine that it is currently in a selfie scenario. At this time, the terminal device can start the secondary image sensor located in the same plane as the main screen, automatically open the selfie function in an application such as the camera application, and launch the home screen, displaying the selfie function interface of the camera application on the home screen.
  • In this embodiment, the terminal device can automatically recognize the current scenario by using the infrared image sensor and intelligently start the application corresponding to the target scenario according to the recognized scenario, which improves user convenience.
  • FIG. 5 is a schematic diagram of an embodiment of an intelligent recommendation service provided by an embodiment of the present application. The embodiment includes:
  • step 501 is similar to step 301 in FIG. 3 and will not be repeated here.
  • Step 502 (the sensor processor processes the data) is similar to step 302 in FIG. 3 and will not be repeated here.
  • step 503 is similar to step 303 in FIG. 3 and will not be repeated here.
  • In step 504, the method for determining whether the terminal device is in a target scenario based on the data collected by the sensor is similar to the method in step 304 in FIG. 3, and details are not described here.
  • If the terminal device determines that it is in the target scenario, step 505 is entered; otherwise step 501 is entered, and the terminal device waits to acquire and process the next batch of sensor data.
  • In step 505, the terminal device may recommend a target service corresponding to the target scenario.
  • the specific methods of recommending target services are introduced below.
  • After the terminal device determines the target scenario it is in, it can recommend the target service corresponding to that scenario to the user. This includes displaying the function entrance of the target service in the always-on display (AOD) area of the terminal device, displaying in the AOD area the program entry of an application included in the target service, automatically starting the target service, and automatically starting an application included in the target service.
  • For example, when the target scenario is a conference scenario, the terminal device may display a mute icon in the AOD area, and the terminal device can start the mute function upon receiving the user's operation instruction on the mute icon.
  • the mute function is to set the volume of all applications in the terminal device to 0.
  • The terminal device may likewise display a vibration icon in the AOD area and start the vibration function upon receiving the user's operation instruction on the vibration icon.
  • The vibration function sets the volume of all applications in the terminal device to 0 and sets the alert mode of all applications to vibrate.
  • If the terminal device does not receive an operation instruction on the corresponding icon in the AOD area within a period of time, for example 15 minutes, it may automatically activate the mute function or the vibration function.
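A minimal sketch of this fallback, assuming the 15-minute timeout mentioned above and illustrative function names:

```python
import threading

AUTO_ACTIVATE_SECONDS = 15 * 60  # the example period given above

def start_mute_mode():
    print("mute mode activated")

def show_mute_icon_with_timeout():
    """Arm a timer when the icon is shown; fire automatically if untouched."""
    timer = threading.Timer(AUTO_ACTIVATE_SECONDS, start_mute_mode)
    timer.start()

    def on_user_tap():
        timer.cancel()      # the user acted first: cancel the fallback
        start_mute_mode()   # and activate immediately

    return on_user_tap      # the UI would call this when the icon is tapped
```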
  • For example, when the target scenario is a sports scenario, the terminal device may display a music playback application icon in the AOD area and start the music playback application upon receiving the user's operation instruction on that icon.
  • In this embodiment, the terminal device can perform service recommendation in a low-power state such as the screen-off state, use various sensor data such as image, audio, and acceleration data as the basis for context awareness, and improve context-awareness accuracy through deep learning algorithms, which improves user convenience.
  • FIG. 6 is a schematic flowchart of an application scenario of the method for business processing in an embodiment of the present application. The application scenario includes:
  • Step S1: when the terminal device is connected to a peer device via Bluetooth, the user can mark whether the currently connected peer device is a car. After the peer device is marked as a car, each time the terminal device connects to it via Bluetooth, the terminal device can confirm that the currently connected peer device is a car.
  • The coprocessor in the AO area of the terminal device obtains the Bluetooth connection status of the terminal device at regular intervals, generally every 10 seconds;
  • Step S2: determine whether the terminal device is connected to the car's Bluetooth.
  • After the terminal device obtains the current Bluetooth connection status, it can determine whether a peer device is currently connected via Bluetooth. If there is one, it further confirms whether that peer device carries the car mark set by the user. If it does, the terminal device confirms that it is connected to the car's Bluetooth and goes to step S8. If Bluetooth is not connected, or the connected peer device does not carry the user-set car mark, it goes to step S3;
  • Step S3: the terminal device obtains data about the ride-hailing software running in it and confirms from that data whether the ride-hailing software is currently started, that is, whether the user is currently using it. If it confirms that the user is using the ride-hailing software, step S9 is entered; otherwise step S4 is entered;
  • Step S4: the terminal device uses an acceleration sensor and a gyroscope to collect acceleration data and angular velocity data, and pre-processes the collected data. Pre-processing includes resampling: for example, if the raw acceleration data collected by the acceleration sensor has a sampling rate of 100 hertz (Hz), the acceleration data obtained after resampling has a sampling rate of 1 Hz. The sampling rate after resampling is determined by the sampling rate of the training samples of the neural network model applied in the scenario recognition model, and is generally kept consistent with it. The pre-processed acceleration data and angular velocity data are stored in the RAM of the terminal device.
  • The RAM includes double data rate synchronous dynamic random access memory (DDR SDRAM), such as DDR2, DDR3, DDR4, and the upcoming DDR5;
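A minimal sketch of the resampling step, assuming simple block averaging (this application does not specify the resampling method):

```python
import numpy as np

def resample_mean(samples: np.ndarray, in_rate: int = 100, out_rate: int = 1) -> np.ndarray:
    """Average consecutive blocks of in_rate/out_rate samples,
    e.g. 100 Hz -> 1 Hz averages blocks of 100 samples."""
    factor = in_rate // out_rate
    n = len(samples) - len(samples) % factor
    return samples[:n].reshape(-1, factor).mean(axis=1)

accel_100hz = np.random.randn(1000)     # 10 s of 100 Hz acceleration samples
accel_1hz = resample_mean(accel_100hz)  # 10 samples, one per second
print(accel_1hz.shape)                  # (10,)
```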
  • Step S5: the scenario recognition model in the terminal device obtains the pre-processed acceleration data and angular velocity data stored in the RAM and confirms from them whether the terminal device is currently in a driving scenario. If yes, go to step S6; if no, go to step S9;
  • Step S6: after the terminal device confirms from the acceleration data and angular velocity data that it is in a driving scenario, further sensor data needs to be obtained for scenario recognition, because recognition results based on acceleration and angular velocity data alone are not highly reliable.
  • The terminal device acquires the image data collected by the infrared image sensor and the audio data collected by the audio collector and stores them in the RAM of the terminal device; alternatively, the collected image data and audio data are first processed by the miniISP and the ASP respectively, and the processed image data and audio data are then stored in the RAM of the terminal device;
  • Step S7: the terminal device acquires the image data and audio data from the RAM and loads them into the scenario recognition model for scenario recognition, confirming from the image data and audio data whether the terminal device is currently in a driving scenario. If yes, go to step S8; if no, go to step S9;
  • Step S8: the terminal device displays a driving-mode icon in the AOD area; the icon is the entrance to the driving-mode function of the terminal device. When the terminal device receives an operation instruction triggered by the user through the icon, it starts the driving mode, which includes starting the navigation application, enlarging the font size of characters displayed by the terminal device, and starting the voice assistant, which can control the terminal device according to the user's voice instructions, for example dialing a phone number;
  • Step S9: the terminal device ends the driving-scenario recognition operation.
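The S1 to S9 flow can be sketched as straight-line code; every call below is a placeholder for the corresponding check or action described above, not an actual device API.

```python
def recognize_driving_scenario(dev):
    if dev.bluetooth_connected() and dev.peer_marked_as_car():    # S1/S2
        return dev.show_driving_icon_in_aod()                     # S8
    if dev.ride_hailing_app_active():                             # S3
        return None                                               # S9: stop
    imu = dev.preprocess(dev.read_accel_gyro())                   # S4
    if dev.scenario_model(imu) != "driving":                      # S5
        return None                                               # S9
    img, audio = dev.read_ir_image(), dev.read_audio()            # S6
    if dev.scenario_model((img, audio)) == "driving":             # S7
        return dev.show_driving_icon_in_aod()                     # S8
    return None                                                   # S9
```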
  • In this embodiment, an artificial intelligence algorithm is used to determine whether the terminal device is currently in a driving scenario, which improves the recognition accuracy of driving scenarios.
  • FIG. 7 is a schematic structural diagram of a computer system according to an embodiment of the present application.
  • the computer system may be a terminal device.
  • the computer system includes a communication module 710, a sensor 720, a user input module 730, an output module 740, a processor 750, an audio and video input module 760, a memory 770, and a power supply 780.
  • the computer system provided in this embodiment may further include an AI processor 790.
  • the communication module 710 may include at least one module that enables communication between the computer system and the communication system or other computer systems.
  • the communication module 710 may include one or more of a wired network interface, a broadcast receiving module, a mobile communication module, a wireless Internet module, a local area communication module, and a location (or positioning) information module.
  • The sensor 720 may sense the current state of the system, such as the open/closed state, position, whether there is contact with the user, orientation, and acceleration/deceleration, and the sensor 720 may generate a sensing signal for controlling the operation of the system.
  • the sensor 720 includes one or more of an infrared image sensor, an audio collector, an acceleration sensor, a gyroscope, an ambient light sensor, a proximity light sensor, and a geomagnetic sensor.
  • the user input module 730 is used to receive input digital information, character information, or contact touch operation / contactless gestures, and receive signal input related to user settings and function control of the system.
  • the user input module 730 includes a touch panel and/or other input devices.
  • the output module 740 includes a display panel for displaying information input by the user, information provided to the user, various menu interfaces of the system, and the like.
  • The display panel may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
  • the touch panel may cover the display panel to form a touch display screen.
  • the output module 740 may also include an audio output module, an alarm, and a haptic module.
  • the audio and video input module 760 is used to input audio signals or video signals.
  • the audio and video input module 760 may include a camera and a microphone.
  • the power supply 780 may receive external power and internal power under the control of the processor 750 and provide power required for the operation of various components of the system.
  • the processor 750 includes one or more processors.
  • the processor 750 is a main processor in the computer system.
  • the processor 750 may include a central processor and a graphics processor.
  • the central processor has multiple cores and belongs to a multi-core processor. The multiple cores can be integrated on the same chip, or they can be independent chips.
  • the memory 770 stores computer programs, which include an operating system program 772, an application program 771, and the like.
  • Typical operating systems include Microsoft's Windows and Apple's macOS for desktops or notebooks, and systems such as Google's Android for mobile terminals.
  • the method provided in the foregoing embodiment may be implemented by software, and may be regarded as a specific implementation of the operating system program 772.
  • The memory 770 may be one or more of the following types: flash memory, hard disk type memory, micro multimedia card memory, card memory (such as SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), replay protected memory block (RPMB), magnetic memory, magnetic disk, or optical disc.
  • The memory 770 may also be a network storage device on the Internet, and the system may perform operations such as updating or reading the memory 770 over the Internet.
  • The processor 750 is configured to read the computer program in the memory 770 and then execute the method defined by the computer program. For example, the processor 750 reads the operating system program 772 to run the operating system on the system and implement various functions of the operating system, or reads one or more application programs 771 to run applications on the system.
  • the memory 770 also stores other data 773 in addition to computer programs.
  • the AI processor 790 is mounted on the processor 750 as a coprocessor, and is used to perform tasks assigned to it by the processor 750.
  • The AI processor 790 may be called by the scenario recognition model to implement some of the complex algorithms involved in scenario recognition. Specifically, the scenario recognition model runs on cores of the processor 750, the processor 750 calls the AI processor 790 to execute the AI algorithm, and the result computed by the AI processor 790 is returned to the processor 750.
  • The connection relationship of the above modules is only an example; the method provided in any embodiment of the present application may also be applied to terminal devices with other connection modes, for example, with all modules connected through a bus.
  • the processor 750 included in the terminal device further has the following functions:
  • obtain data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least to-be-processed image data generated from the image data collected by the infrared image sensor;
  • determine, through a scenario recognition model, the target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set; and determine the business processing method according to the target scenario.
  • the processor 750 is specifically used to perform the following steps:
  • the target scenario corresponding to the data to be processed is determined by the AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm, and the AI algorithm runs in the AI processor 790.
  • the processor 750 is specifically used to perform the following steps:
  • The sensor further includes at least one of an audio collector and a first sub-sensor, and the data to be processed includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from the audio data collected by the audio collector, and the first to-be-processed sub-data is generated from the first sub-sensor data collected by the first sub-sensor.
  • the processor 750 is specifically used to perform the following steps:
  • The processor 750 further includes at least one of an image signal processor, an audio signal processor, and a first sub-sensor processor, where:
  • the image signal processor is used to obtain image data through the infrared image sensor when the image acquisition preset running time is reached, wherein the image data is the data collected by the infrared image sensor;
  • the AI processor 790 is specifically configured to obtain the image data to be processed through the image signal processor, wherein the image data to be processed is generated by the image signal processor according to the image data;
  • the audio signal processor is used to obtain the audio data through the audio collector when the preset operation time of the audio acquisition is reached;
  • the AI processor 790 is specifically configured to obtain the to-be-processed audio data through the audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data;
  • the first sub-sensor processor is configured to acquire first sub-sensor data through the first sub-sensor when the first preset running time is reached, wherein the first sub-sensor data is data collected by the first sub-sensor ;
  • the coprocessor is specifically used to obtain the first to-be-processed sub-data through the first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
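A minimal sketch of such per-sensor preset running times, with illustrative periods and placeholder acquisition and processing functions:

```python
import sched
import time

PERIODS = {"ir_image": 5.0, "audio": 2.0, "accel": 1.0}  # assumed periods, seconds

def read_sensor(name):
    return b"raw"  # placeholder for the actual acquisition call

def process_with_sensor_processor(name, raw):
    print(f"{name}: {len(raw)} bytes pre-processed")  # miniISP / ASP / coprocessor

def acquire(sensor_name, scheduler):
    raw = read_sensor(sensor_name)
    process_with_sensor_processor(sensor_name, raw)
    # Re-arm this sensor's own preset running time.
    scheduler.enter(PERIODS[sensor_name], 1, acquire, (sensor_name, scheduler))

s = sched.scheduler(time.monotonic, time.sleep)
for name in PERIODS:
    s.enter(PERIODS[name], 1, acquire, (name, s))
# s.run()  # would run forever, rescheduling each sensor on its period
```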
  • the processor 750 is specifically used to perform the following steps:
  • The coprocessor is specifically configured to: if the target scenario is a QR-code-scanning scenario, determine that the business processing method is to start the main image sensor of the terminal device and/or start an application program in the terminal device that supports the QR-code-scanning function.
  • the processor 750 is specifically used to perform the following steps:
  • The coprocessor is specifically configured to: if the target scenario is a conference scenario, determine that the business processing method is to start the mute mode of the terminal device and/or start the mute function of applications in the terminal device and/or display a mute-mode icon in the always-on display area of the terminal device's screen, where the mute-mode icon is used to start the mute mode.
  • the processor 750 is specifically used to perform the following steps:
  • The coprocessor is specifically configured to: if the target scenario is a sports scenario, determine that the business processing method is to start the sport mode of the terminal device and/or start the sport-mode function of applications in the terminal device and/or display a music playback icon in the always-on display area of the terminal device's screen, where the sport mode of the terminal device includes a step-counting function, and the music playback icon is used to start or pause music playback.
  • the processor 750 is specifically used to perform the following steps:
  • The coprocessor is specifically configured to: if the target scenario is a driving scenario, determine that the business processing method is to start the driving mode of the terminal device and/or start the driving-mode function of applications in the terminal device and/or display a driving-mode icon in the always-on display area of the terminal device's screen, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving-mode icon is used to start the driving mode.
  • FIG. 8 is a schematic structural diagram of an AI processor provided by an embodiment of the present application.
  • the AI processor 800 is connected to the main processor and external memory.
  • the core part of the AI processor 800 is an arithmetic circuit 803, and the arithmetic circuit 803 is controlled by the controller 804 to extract data in the memory and perform mathematical operations.
  • The arithmetic circuit 803 internally includes multiple processing engines (PEs). In some implementations, the arithmetic circuit 803 is a two-dimensional systolic array; it may also be a one-dimensional systolic array or another electronic circuit capable of performing mathematical operations such as multiplication and addition. In other implementations, the arithmetic circuit 803 is a general-purpose matrix processor.
  • the arithmetic circuit 803 takes the data corresponding to the matrix B from the weight memory 802 and caches it on each PE of the arithmetic circuit 803.
  • The arithmetic circuit 803 takes the data of matrix A from the input memory 801, performs the matrix operation between matrix A and matrix B, and stores the partial or final result of the matrix in the accumulator 808.
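The dataflow just described can be modeled functionally as follows. The sketch reproduces the arithmetic (weights held stationary in the PEs, inputs streamed in, partial sums collected in an accumulator) but not the cycle-level systolic timing.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    accumulator = np.zeros((A.shape[0], B.shape[1]))
    # Each k models one wavefront: the PEs multiply a streamed column of A
    # against their cached row of B and add the result into the accumulator.
    for k in range(A.shape[1]):
        accumulator += np.outer(A[:, k], B[k, :])
    return accumulator

A = np.random.rand(4, 3)  # streamed from the input memory (801)
B = np.random.rand(3, 2)  # cached from the weight memory (802) into the PEs
assert np.allclose(systolic_matmul(A, B), A @ B)
```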
  • the unified memory 806 is used to store input data and output data.
  • the weight data is directly transferred to the weight memory 802 through the storage unit access controller 805 (for example, direct memory access controller (DMAC)).
  • the input data is also transferred to the unified memory 806 through the storage unit access controller 805.
  • The bus interface unit (BIU) 810 is used for interaction between the AXI (advanced extensible interface) bus and both the storage unit access controller 805 and the instruction fetch memory 809 (instruction fetch buffer).
  • The bus interface unit 810 is used by the instruction fetch memory 809 to obtain instructions from the external memory, and is also used by the storage unit access controller 805 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
  • The storage unit access controller 805 is mainly used to carry the input data in the external memory to the unified memory 806, the weight data to the weight memory 802, or the input data to the input memory 801.
  • The vector calculation unit 807 usually includes a plurality of operation processing units and, if necessary, further processes the output of the arithmetic circuit 803, for example vector multiplication, vector addition, exponential operation, logarithmic operation, and/or magnitude comparison.
  • the vector calculation unit 807 can store the processed vector in the unified memory 806.
  • the vector calculation unit 807 may apply a non-linear function to the output of the arithmetic circuit 803, such as a vector of accumulated values, to generate an activation value.
  • the vector calculation unit 807 generates normalized values, merged values, or both.
  • the processed vector can be used as the activation input of the arithmetic circuit 803.
  • the fetch memory 809 connected to the controller 804 is used to store instructions used by the controller 804.
  • The unified memory 806, the input memory 801, the weight memory 802, and the instruction fetch memory 809 are all on-chip memories.
  • the external memory in the figure is independent of the AI processor hardware architecture.
  • FIG. 9 is a schematic diagram of an embodiment of the business processing apparatus in the embodiment of the present application, including:
  • the obtaining unit 901, configured to obtain data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least to-be-processed image data generated from the image data collected by the infrared image sensor;
  • the determining unit 902, configured to determine, through a scenario recognition model, a target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set;
  • the determining unit 902 is further configured to determine the business processing method according to the target scenario.
  • The terminal device collects data through sensors deployed inside it or connected to it; the sensors include at least an infrared image sensor, and data to be processed is generated from the collected data, including at least to-be-processed image data generated from the image data collected by the infrared image sensor.
  • After the terminal device obtains the data to be processed, it can determine the target scenario corresponding to that data through the scenario recognition model.
  • The scenario recognition model is obtained by offline training on the data set collected by the sensors and the scenario type set corresponding to different data; offline training means using a deep learning framework for model design and training.
  • After the terminal device determines the current target scenario, it can determine the corresponding business processing method according to the target scenario.
  • By using the data collected by the sensors together with the scenario recognition model, the target scenario of the terminal device can be determined, and the corresponding business processing method can be determined according to it without additional user operations, which improves user convenience.
  • the determining unit 902 is specifically configured to determine the target scenario corresponding to the data to be processed through the AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm, and the AI algorithm runs in the AI processor.
  • The terminal device specifically uses the AI algorithm in the scenario recognition model to determine the target scenario corresponding to the data to be processed.
  • The AI algorithm includes a deep learning algorithm and runs on the AI processor in the terminal device.
  • Because the AI processor has powerful parallel computing capability and runs AI algorithms efficiently, the scenario recognition model uses the AI algorithm to determine the specific target scenario.
  • Running the AI algorithm on the AI processor in the terminal device improves the efficiency of scenario recognition and further improves user convenience.
  • The sensor further includes at least one of an audio collector and a first sub-sensor, and the data to be processed includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from the audio data collected by the audio collector, and the first to-be-processed sub-data is generated from the first sub-sensor data collected by the first sub-sensor.
  • In addition to the infrared image sensor, the sensors deployed in the terminal device include at least one of an audio collector and a first sub-sensor; the first sub-sensor may be one or more of sensors such as an acceleration sensor, a gyroscope, an ambient light sensor, a proximity light sensor, and a geomagnetic sensor.
  • The audio collector collects audio data, which the terminal device then processes to generate the to-be-processed audio data.
  • The first sub-sensor collects the first sub-sensor data, which the terminal device processes to generate the first to-be-processed sub-data.
  • The terminal device uses multiple sensors to collect data in multiple dimensions, which improves the accuracy of scenario recognition.
  • the acquisition unit 901 is specifically configured to acquire image data through the infrared image sensor when the preset operation time of image acquisition is reached, wherein the image data is data collected by the infrared image sensor;
  • the acquiring unit 901 is specifically configured to acquire the image data to be processed through an image signal processor, wherein the image data to be processed is generated by the image signal processor according to the image data;
  • the acquiring unit 901 is specifically configured to acquire the audio data through the audio collector when the preset time for audio collection is reached;
  • the acquiring unit 901 is specifically configured to acquire the to-be-processed audio data through an audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data;
  • the acquiring unit 901 is specifically configured to acquire first sub-sensor data through the first sub-sensor when the first preset running time is reached, wherein the first sub-sensor data is acquired by the first sub-sensor The data;
  • the acquiring unit 901 is specifically configured to acquire the first to-be-processed sub-data through the first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
  • One or more of the infrared image sensor, the audio collector, and the first sub-sensor can each collect its corresponding data after its own preset running time is reached. After the original data is obtained, the terminal device uses the processor corresponding to the sensor to process the original sensor data and generate the to-be-processed sensor data.
  • By setting preset running times, the sensors are started periodically to collect data, and the collected raw data is processed by the processor corresponding to each sensor.
  • This reduces the cache space occupied by the scenario recognition model and its power consumption, and extends the standby time of the terminal device.
  • The determining unit 902 is specifically configured to: if it determines that the target scenario is a QR-code-scanning scenario, determine according to that scenario that the business processing method is to start the main image sensor of the terminal device and/or start an application program in the terminal device that supports the QR-code-scanning function.
  • When the terminal device determines, according to data collected by one or more of its sensors, that the target scenario corresponding to that data is the QR-code-scanning scenario, it determines the business processing method corresponding to that scenario.
  • This includes starting the main image sensor in the terminal device, which the terminal device can use to scan the QR code, and starting an application that supports the QR-code-scanning function, for example starting WeChat and opening its QR-code-scanning function. The main image sensor and such an application may be started at the same time, or either may be started according to a preset command or a received user instruction; this is not limited here.
  • Using data collected by multi-dimensional sensors, the terminal device determines through the scenario recognition model that the target scenario is the QR-code-scanning scenario and can then automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
  • The determining unit 902 is specifically configured to: if it determines that the target scenario is a conference scenario, determine according to that scenario that the business processing method is to start the mute mode of the terminal device and/or start the mute function of applications in the terminal device and/or display a mute-mode icon in the always-on display area of the terminal device's screen.
  • When the terminal device determines, according to data collected by one or more of its sensors, that the target scenario corresponding to that data is the conference scenario, it determines the business processing method corresponding to the conference scenario, including starting the mute mode of the terminal device.
  • When the terminal device is in mute mode, all applications running in it are muted.
  • The terminal device can also start the mute function of applications running in it, for example the mute function of WeChat, in which case WeChat's alert sound is switched to mute. A mute-mode icon can also be displayed in the always-on display area of the terminal device's screen.
  • The terminal device can receive the user's mute operation instruction through the mute-mode icon and starts the mute mode in response to that instruction.
  • Using data collected by multi-dimensional sensors, the terminal device determines through the scenario recognition model that the target scenario is the conference scenario and can then automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
  • The determining unit 902 is specifically configured to: if it determines that the target scenario is a sports scenario, determine according to that scenario that the business processing method is to start the sport mode of the terminal device and/or start the sport-mode function of applications in the terminal device and/or display a music playback icon in the always-on display area of the terminal device's screen.
  • When the terminal device determines, according to data collected by one or more of its sensors, that the target scenario corresponding to that data is a sports scenario, it determines the business processing method corresponding to the sports scenario, including starting the sport mode of the terminal device.
  • In sport mode, the terminal device starts the pedometer application and the physiological data monitoring application.
  • The terminal device can also start the sport-mode function of applications in it, for example the sport function of NetEase Cloud Music, in which case NetEase Cloud Music plays in sport mode. A music playback icon can also be displayed in the always-on display area of the terminal device's screen.
  • The terminal device can receive the user's music playback instruction through the music playback icon and starts or pauses music playback in response to that instruction.
  • Using data collected by multi-dimensional sensors, the terminal device determines through the scenario recognition model that the target scenario is a sports scenario and can then automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
  • The determining unit 902 is specifically configured to: if it determines that the target scenario is a driving scenario, determine according to that scenario that the business processing method is to start the driving mode of the terminal device and/or start the driving-mode function of applications in the terminal device and/or display a driving-mode icon in the always-on display area of the terminal device's screen.
  • When the terminal device determines, according to data collected by one or more of its sensors, that the target scenario corresponding to that data is the driving scenario, it determines the business processing method corresponding to the driving scenario, including starting the driving mode of the terminal device.
  • In driving mode, the terminal device starts the voice assistant.
  • The terminal device can perform related operations according to the voice instructions input by the user.
  • The terminal device can also start the navigation function.
  • The terminal device can also start the driving-mode function of applications in it, for example the driving function of Gaode Map, whose navigation then runs in driving mode. A driving-mode icon can also be displayed in the always-on display area of the terminal device's screen.
  • The terminal device can receive the user's driving-mode instruction through the driving-mode icon and starts the driving mode in response to that instruction.
  • the terminal device uses the data collected by the multi-dimensional sensors, and after determining the target scenario as the driving scenario through the scenario recognition model, it can automatically execute related business processing methods, which improves the intelligence of the terminal device and improves the user's convenience of operation.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • The technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

This application discloses a method for business processing, applied to a terminal device. Image data is acquired through a sensor configured on the terminal device, the current scenario is automatically matched according to the image data, and the processing method corresponding to the current scenario is then run automatically. For example, when a two-dimensional code is captured (possibly together with text related to "payment"), the current scenario is recognized as a payment scenario, and the payment software is opened automatically. This application also provides a terminal device and a business processing apparatus. Through the above method or apparatus, the user's operation steps can be simplified and the intelligence of operation increased.

Description

A method for business processing and a related apparatus
This application claims priority to Chinese Patent Application No. 201811392818.7, filed with the Chinese Patent Office on November 21, 2018 and entitled "A method for business processing and a related apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of artificial intelligence, and in particular to a method for business processing and a related apparatus.
Background
With the development of technology, terminal devices represented by smartphones occupy an ever larger share of people's lives. Taking a smartphone as an example, in daily life people can use it to scan a picture carrying a two-dimensional code in order to implement functions of related applications or obtain information.
At present, when a smartphone's screen is off and a two-dimensional code needs to be scanned, the screen must first be lit; after the smartphone is unlocked, the relevant application must then be operated to complete the scan.
However, this way of scanning a picture carrying a two-dimensional code suffers from cumbersome operation and a low degree of intelligence, which reduces user convenience.
Summary
Embodiments of this application provide a method for business processing and a related apparatus, applied to a terminal device. The terminal device can obtain data to be processed through its sensors; a scenario recognition model in the terminal device determines the current scenario from that data and determines the corresponding business processing method according to the current scenario. Because the business processing method is a preset way of handling business in the terminal device, the user's operation steps can be simplified, the intelligence of operation increased, and user convenience improved.
To solve the above technical problem, the embodiments of this application provide the following technical solutions:
According to a first aspect, an embodiment of this application provides a method for business processing, applied to a terminal device, including: obtaining data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least to-be-processed image data generated from image data collected by the infrared image sensor; determining, through a scenario recognition model, a target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set; and determining a business processing method according to the target scenario.
In this application, the terminal device collects data through sensors deployed inside it or connected to it; the sensors include at least an infrared image sensor, and data to be processed is generated from the collected data, including at least to-be-processed image data generated from the image data collected by the infrared image sensor. After obtaining the data to be processed, the terminal device can determine the corresponding target scenario through the scenario recognition model, which is obtained by offline training on a data set collected by the sensors and the scenario type set corresponding to different data; offline training means model design and training on a deep learning framework. After the terminal device determines the current target scenario, it can determine the corresponding business processing method according to that scenario. Using the data collected by the sensors together with the scenario recognition model, the target scenario of the terminal device can be determined, and the corresponding business processing method can be determined according to it; the terminal device can do this automatically without additional operations, improving user convenience. The above infrared image sensor is always on. As technology develops, the image sensor in this application need not be an infrared sensor, as long as it can capture images; it is simply that, among currently known sensors, the infrared sensor has relatively low power consumption.
In a possible implementation of the first aspect, determining the target scenario corresponding to the data to be processed through the scenario recognition model includes: determining the target scenario corresponding to the data to be processed through an AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm and runs in an AI processor.
In this application, the terminal device specifically uses the AI algorithm in the scenario recognition model to determine the target scenario corresponding to the data to be processed. The AI algorithm includes a deep learning algorithm and runs on the AI processor in the terminal device. Because the AI processor has powerful parallel computing capability and runs AI algorithms efficiently, the scenario recognition model uses the AI algorithm to determine the specific target scenario; running the AI algorithm on the AI processor in the terminal device improves the efficiency of scenario recognition and further improves user convenience.
In a possible implementation of the first aspect, the sensor further includes at least one of an audio collector and a first sub-sensor, and the data to be processed includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from audio data collected by the audio collector, and the first to-be-processed sub-data is generated from first sub-sensor data collected by the first sub-sensor.
In this application, in addition to the infrared image sensor, the sensors deployed in the terminal device include at least one of an audio collector and a first sub-sensor; the first sub-sensor may be one or more of sensors such as an acceleration sensor, a gyroscope, an ambient light sensor, a proximity light sensor, and a geomagnetic sensor. The audio collector collects audio data, which the terminal device processes to generate the to-be-processed audio data; the first sub-sensor collects the first sub-sensor data, which the terminal device processes to generate the first to-be-processed sub-data. The terminal device uses multiple sensors to collect data in multiple dimensions, which improves the accuracy of scenario recognition.
In a possible implementation of the first aspect, obtaining the data to be processed includes: when the preset running time for image acquisition is reached, obtaining image data through the infrared image sensor, where the image data is data collected by the infrared image sensor, and obtaining the to-be-processed image data through an image signal processor, where the to-be-processed image data is generated by the image signal processor from the image data; and/or, when the preset running time for audio acquisition is reached, obtaining the audio data through the audio collector, and obtaining the to-be-processed audio data through an audio signal processor, where the to-be-processed audio data is generated by the audio signal processor from the audio data; and/or, when the first preset running time is reached, obtaining the first sub-sensor data through the first sub-sensor, where the first sub-sensor data is data collected by the first sub-sensor, and obtaining the first to-be-processed sub-data through a first sub-sensor processor, where the first to-be-processed sub-data is generated by the first sub-sensor processor from the first sub-sensor data.
In this application, one or more of the infrared image sensor, the audio collector, and the first sub-sensor each collect the data corresponding to that sensor after their respective preset running times are reached. After the raw sensor data is collected, the terminal device uses the processor corresponding to the sensor to process the raw sensor data and generate the to-be-processed sensor data. By setting preset running times, the sensors are started periodically to collect data, and the collected raw data can be processed by the processor corresponding to the sensor. This reduces the cache space occupied by the scenario recognition model and its power consumption, and extends the standby time of the terminal device.
In a possible implementation of the first aspect, determining the business processing method according to the target scenario includes: if the target scenario is a QR-code-scanning scenario, determining according to that scenario that the business processing method is to start the main image sensor of the terminal device and/or start an application program in the terminal device that supports the QR-code-scanning function.
In this application, when the terminal device determines, from data collected by one or more of its sensors, that the target scenario corresponding to that data is the QR-code-scanning scenario, it determines the business processing method corresponding to QR-code scanning, including starting the main image sensor in the terminal device, which the terminal device can use to scan the QR code; the terminal device can also start an application that supports the QR-code-scanning function, for example starting WeChat and opening its QR-code-scanning function. The main image sensor and such an application may be started at the same time, or either may be started according to a preset instruction or a received user instruction; this is not limited here. Besides scanning two-dimensional codes, this may also be applied to scanning barcodes and other graphic identifiers; this is not limited here. Using data collected by multi-dimensional sensors, after determining through the scenario recognition model that the target scenario is the QR-code-scanning scenario, the terminal device can automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
In a possible implementation of the first aspect, determining the business processing method according to the target scenario includes: if the target scenario is a conference scenario, determining according to that scenario that the business processing method is to start the mute mode of the terminal device and/or start the mute function of applications in the terminal device and/or display a mute-mode icon in the always-on display area of the terminal device's screen, where the mute-mode icon is used to start the mute mode.
In this application, when the terminal device determines, from data collected by one or more of its sensors, that the target scenario corresponding to that data is the conference scenario, it determines the business processing method corresponding to the conference scenario, including starting the mute mode of the terminal device: when the terminal device is in mute mode, all applications running in it are muted. The terminal device can also start the mute function of applications running in it, for example the mute function of WeChat, whose alert sound is then switched to mute; it can also display a mute-mode icon in the always-on display area of its screen, receive the user's mute operation instruction through that icon, and start the mute mode in response. Using data collected by multi-dimensional sensors, after determining through the scenario recognition model that the target scenario is the conference scenario, the terminal device can automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
In a possible implementation of the first aspect, determining the business processing method according to the target scenario includes: if the target scenario is a sports scenario, determining according to that scenario that the business processing method is to start the sport mode of the terminal device and/or start the sport-mode function of applications in the terminal device and/or display a music playback icon in the always-on display area of the terminal device's screen, where the sport mode of the terminal device includes a step-counting function, and the music playback icon is used to start or pause music playback.
In this application, when the terminal device determines, from data collected by one or more of its sensors, that the target scenario corresponding to that data is a sports scenario, it determines the business processing method corresponding to the sports scenario, including starting the sport mode of the terminal device: in sport mode, the terminal device starts the pedometer application and the physiological data monitoring application, using its relevant sensors to record the user's step count and related physiological data. The terminal device can also start the sport-mode function of applications, for example the sport function of NetEase Cloud Music, whose playback mode is then the sport mode; it can also display a music playback icon in the always-on display area of its screen, receive the user's music playback instruction through that icon, and start or pause playback in response. Using data collected by multi-dimensional sensors, after determining through the scenario recognition model that the target scenario is a sports scenario, the terminal device can automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
In a possible implementation of the first aspect, determining the business processing method according to the target scenario includes: if the target scenario is a driving scenario, determining according to that scenario that the business processing method is to start the driving mode of the terminal device and/or start the driving-mode function of applications in the terminal device and/or display a driving-mode icon in the always-on display area of the terminal device's screen, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving-mode icon is used to start the driving mode.
In this application, when the terminal device determines, from data collected by one or more of its sensors, that the target scenario corresponding to that data is the driving scenario, it determines the business processing method corresponding to the driving scenario, including starting the driving mode of the terminal device: in driving mode, the terminal device starts the voice assistant, can perform related operations according to the user's voice instructions, and can also start the navigation function. The terminal device can also start the driving-mode function of applications, for example the driving function of Gaode Map, whose navigation then runs in driving mode; it can also display a driving-mode icon in the always-on display area of its screen, receive the user's driving-mode instruction through that icon, and start the driving mode in response. Using data collected by multi-dimensional sensors, after determining through the scenario recognition model that the target scenario is the driving scenario, the terminal device can automatically execute the related business processing method, which improves the intelligence of the terminal device and the convenience of user operation.
According to a second aspect, an embodiment of this application provides a terminal device, including a sensor and a processor, where the sensor includes at least an infrared image sensor. The processor is configured to obtain data to be processed, where the data to be processed is generated from data collected by the sensor and includes at least to-be-processed image data generated from image data collected by the infrared image sensor. The processor is further configured to determine, through a scenario recognition model, a target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on the sensor data set acquired by the sensor and a scenario type set. The processor is further configured to determine a business processing method according to the target scenario, and to perform the method for business processing according to the first aspect above.
According to a third aspect, an embodiment of this application provides a business processing apparatus, applied to a terminal device, including: an obtaining unit, configured to obtain data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least to-be-processed image data generated from image data collected by the infrared image sensor; and a determining unit, configured to determine, through a scenario recognition model, a target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set; the determining unit is further configured to determine a business processing method according to the target scenario.
In a possible implementation of the third aspect, the determining unit is specifically configured to determine the target scenario corresponding to the data to be processed through an AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm and runs in an AI processor.
In a possible implementation of the third aspect, the sensor further includes at least one of an audio collector and a first sub-sensor, and the data to be processed includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from audio data collected by the audio collector, and the first to-be-processed sub-data is generated from first sub-sensor data collected by the first sub-sensor.
In a possible implementation of the third aspect, the obtaining unit is specifically configured to: when the preset running time for image acquisition is reached, obtain image data through the infrared image sensor, where the image data is data collected by the infrared image sensor, and obtain the to-be-processed image data through an image signal processor, where the to-be-processed image data is generated by the image signal processor from the image data; and/or, when the preset running time for audio acquisition is reached, obtain the audio data through the audio collector, and obtain the to-be-processed audio data through an audio signal processor, where the to-be-processed audio data is generated by the audio signal processor from the audio data; and/or, when the first preset running time is reached, obtain the first sub-sensor data through the first sub-sensor, where the first sub-sensor data is data collected by the first sub-sensor, and obtain the first to-be-processed sub-data through a first sub-sensor processor, where the first to-be-processed sub-data is generated by the first sub-sensor processor from the first sub-sensor data.
In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a QR-code-scanning scenario, determine according to that scenario that the business processing method is to start the main image sensor of the terminal device and/or start an application program in the terminal device that supports the QR-code-scanning function.
In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a conference scenario, determine according to that scenario that the business processing method is to start the mute mode of the terminal device and/or start the mute function of applications in the terminal device and/or display a mute-mode icon in the always-on display area of the terminal device's screen, where the mute-mode icon is used to start the mute mode.
In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a sports scenario, determine according to that scenario that the business processing method is to start the sport mode of the terminal device and/or start the sport-mode function of applications in the terminal device and/or display a music playback icon in the always-on display area of the terminal device's screen, where the sport mode of the terminal device includes a step-counting function, and the music playback icon is used to start or pause music playback.
In a possible implementation of the third aspect, the determining unit is specifically configured to: if it determines that the target scenario is a driving scenario, determine according to that scenario that the business processing method is to start the driving mode of the terminal device and/or start the driving-mode function of applications in the terminal device and/or display a driving-mode icon in the always-on display area of the terminal device's screen, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving-mode icon is used to start the driving mode.
According to a fifth aspect, an embodiment of this application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for business processing according to the first aspect above.
According to a sixth aspect, an embodiment of this application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method for business processing described in the first aspect above.
According to a seventh aspect, this application provides a chip system. The chip system includes a processor configured to support a network device in implementing the functions involved in the above aspects, for example sending or processing the data and/or information involved in the above methods. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the network device. The chip system may consist of a chip, or may include a chip and other discrete devices.
According to an eighth aspect, this application provides a method for business processing, applied to a terminal device configured with an always-on image sensor, including: obtaining data, where the data includes image data collected by the image sensor; determining, through a scenario recognition model, a target scenario corresponding to the data, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set; and determining a business processing method according to the target scenario.
For other implementations of the eighth aspect, refer to the various implementations of the first aspect described above; details are not repeated here.
According to a ninth aspect, this application provides a terminal device configured with an always-on image sensor, where the terminal device is configured to implement the method described in any of the foregoing implementations.
In addition, for the technical effects brought by any implementation of the second to ninth aspects, refer to the technical effects brought by the implementations of the first aspect; details are not repeated here.
It can be seen from the above technical solutions that the embodiments of this application have the following advantage:
Through the above method, the terminal device can obtain data to be processed through its sensors; the scenario recognition model in the terminal device determines the current scenario from that data and determines the corresponding business processing method according to the current scenario. Because the business processing method is a preset way of handling business in the terminal device, the user's operation steps can be simplified, the intelligence of operation increased, and user convenience improved. For example, when the terminal device is a smartphone whose screen is off and a picture carrying a two-dimensional code needs to be scanned, the smartphone can automatically implement the functions of the related applications or obtain information without additional operations, improving user convenience.
Brief Description of the Drawings
FIG. 1a is a schematic diagram of a system architecture in an embodiment of this application;
FIG. 1b is a schematic diagram of another system architecture in an embodiment of this application;
FIG. 2 is a schematic diagram of a usage scenario involved in the method for business processing provided by an embodiment of this application;
FIG. 3 is a schematic diagram of an embodiment of a method for business processing provided by an embodiment of this application;
FIG. 4 is a schematic diagram of an embodiment of intelligent application startup provided by an embodiment of this application;
FIG. 5 is a schematic diagram of an embodiment of an intelligent recommendation service provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of an application scenario of a method for business processing in an embodiment of this application;
FIG. 7 is a schematic structural diagram of a computer system provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of an AI processor provided by an embodiment of this application;
FIG. 9 is a schematic diagram of an embodiment of a business processing apparatus in an embodiment of this application.
具体实施方式
本申请提供一种业务处理的方法以及相关装置,终端设备可以通过终端设备中的传感器获取待处理数据,终端设备中的情景识别模型根据该待处理数据确定当前的情景并根据当前的情景确定对应的业务处理方式,由于业务处理方式是终端设备中预设的处理业务的方式,因此可以简化用户的操作步骤,提升操作的智能程度,提升用户的使用便利度。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别相似的对象,而不必用于介绍特定的顺序或先后次序。应该理 解这样使用的数据在适当情况下可以互换,以便这里介绍的实施例能够以除了在这里图示或介绍的内容以外的顺序实施。此外,术语“包括”或“具有”及其任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
为了方便理解本申请的各个实施例,首先介绍本申请中可能出现的几个概念。应理解的是,以下的概念解释可能会因为本申请的具体情况有所限制,但并不代表本申请仅能局限于该具体情况,以下概念的解释伴随不同实施例的具体情况可能也会存在差异。
1、处理器
终端设备上设有多种计算核心(还可称之为核心或计算单元),这些核心共同构成终端设备的处理器。本申请实施例涉及的核心主要为异构核心,这些核心的类型包括但不限于以下几种:
1)中央处理器(central processing unit,CPU)是一块超大规模的集成电路,是一台计算机的运算核心(core)和控制核心(control unit)。它的功能主要是解释计算机指令以及处理计算机软件中的数据。
2)图形处理器(graphics processing unit,GPU),又称显示核心、视觉处理器、显示芯片,是一种专门在个人电脑、工作站、游戏机和一些移动终端设备(如平板电脑、智能手机等)上进行图像运算工作的微处理器。
3)数字信号处理器(digital signal processor,DSP),DSP指能够实现数字信号处理技术的芯片。DSP芯片的内部采用程序和数据分开的哈佛结构,具有专门的硬件乘法器,广泛采用流水线操作,提供特殊的DSP指令,可以用来快速地实现各种数字信号处理算法。
3.1)图像信号处理器(image signal processor,ISP),ISP指能够实现图像信号处理计算的芯片,ISP是DSP芯片的一种,主要作用是对图像传感器输出的数据进行后期处理,主要功能有线性纠正、噪声去除、坏点校正、内插、白平衡以及自动曝光等。
3.2)音频信号处理器(audio signal processor,ASP),ASP指能够实现音频信号处理计算的芯片,ASP是DSP芯片的一种,主要作用是对音频采集器输出的数据进行后期处理,主要功能有声源定位、声源增强、回声消除以及噪音抑制技术等。
4)AI处理器(artificial intelligence,人工智能)
AI处理器又称人工智能处理器或AI加速器,是运行人工智能算法的处理芯片,通常采用专用集成电路(application specific integrated circuits,ASIC)实现,还可采用现场可编程门阵列(field-programmable gate array,FPGA)实现,还可以采用GPU实现,此处不作限定。AI处理器通常采用脉动阵列(systolic array)结构,在这种阵列结构中,数据按预先确定的“流水”方式在阵列的处理单元间有节奏地“流动”。在数据流动的过程中,所有的处理单元同时并行地对流经它的数据进行处理,因而可以达到很高的并行处理速度。
AI处理器具体可以是神经网络处理器(neural-network processing unit,NPU)、张量处理器(tensor processing unit,TPU)、智能处理器(intelligence processing unit,IPU)以及GPU等。
4.1)神经网络处理器(neural-network processing unit,NPU),NPU在电路层模拟人类神经元和突触,并且用深度学习指令集直接处理大规模的神经元和突触,一条指令完成一组神经元的处理。相比于CPU中采取的存储与计算相分离的冯诺伊曼结构,NPU通过突触权重实现存储和计算一体化,从而大大提高了运行效率。
4.2)张量处理器(tensor processing unit,TPU)。人工智能旨在为机器赋予人的智能,机器学习是实现人工智能的强有力方法。所谓机器学习,即研究如何让计算机自动学习的学科。TPU就是这样一款专用于机器学习的芯片,它可以是一个针对Tensorflow平台的可编程人工智能加速器,本质是脉动阵列结构的加速器。其内部的指令集在Tensorflow程序变化或者更新算法时也可以运行。TPU可以提供高吞吐量的低精度计算,用于模型的前向运算而不是模型训练,且能效(TOPS/w)更高。TPU也可以称之为智能处理器(intelligence processing unit,IPU)。
2、传感器
终端设备上设有多种传感器(sensor),终端设备通过这些传感器获取外界信息。本申请实施例涉及的传感器包括但不限于以下几种:
1)红外线图像传感器(infrared radiation-red green blue image sensor,IR-RGB image sensor),采用CCD单元(charge-coupled device,电荷耦合器件)或标准CMOS单元(complementary meta-oxide semiconductor,互补金属氧化物半导体),通过滤波片滤波,只允许透过彩色波长段和设定的红外波长段的光,在图像信号处理器中分离IR(infrared radiation,红外)图像数据流以及RGB(red green blue,三原色)图像数据流,IR图像数据流为微光环境下得到的图像数据流,分离得到的该两个图像数据流用做其他应用处理。
2)加速度传感器(acceleration sensor),加速度传感器用于测量物体的加速度变化值,通常从X、Y以及Z三个方向进行测算,X方向值的大小代表终端设备水平方向运动,Y方向值的大小代表终端设备垂直方向移动,Z方向值的大小代表终端设备的空间垂直方向运动。在实际场景中,用于测量终端设备的运动速度和方向,例如:当用户拿着终端设备运动时,会出现上下摆动的情况,这样可以检测出加速度在某个方向上来回改变,通过检测这个来回改变的次数,可以计算出步数。
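作为上述“通过检测加速度往复变化的次数计算步数”的一种可能实现的示意,下面给出一个Python代码草图,其中的阈值、重力取值与函数名均为假设,仅供理解,并非本申请限定的实现:

```python
import math

def count_steps(samples, ratio=1.2, gravity=9.8):
    """samples: [(ax, ay, az), ...],按固定时间间隔采集的加速度数据(m/s^2)。
    通过统计加速度幅值围绕重力值往复穿越的次数估算步数(滞回比较,避免抖动误计)。"""
    steps = 0
    above = False  # 幅值当前是否处于高于上阈值的状态
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if not above and magnitude > gravity * ratio:
            above = True            # 检测到一次向上摆动
        elif above and magnitude < gravity / ratio:
            above = False
            steps += 1              # 完成一次来回改变,计一步
    return steps
```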
3)陀螺仪(gyroscope),陀螺仪是一种测量一个物体围绕某个中心旋转轴的角速度的传感器,应用于终端设备中的陀螺仪为微机械陀螺仪芯片(micro-electro-mechanical-systems gyroscope,MEMS gyroscope),常见的MEMS陀螺仪芯片为三轴陀螺仪芯片,可追踪6个方向的位移变化。三轴陀螺仪芯片可以获取终端设备在x、y、z三个方向上的角加速度的变化值,用于检测终端设备的旋转方向。
4)环境光传感器(ambient light sensor),环境光传感器是一种测量外界光线变化的传感器,基于光电效应测量外界光线强度的变化,应用于终端设备中时,用来调节终端设备的显示屏的亮度。由于显示屏通常是终端设备最耗电的部分,因此运用环境光传感器来协助调整屏幕亮度,能进一步达到延长电池寿命的作用。
5)接近光传感器(proximity sensor),接近光传感器由一个红外线发射灯和红外辐射光线探测器构成,位于终端设备的听筒附近。终端设备靠近耳朵时,系统借助接近光传感器知道用户正在通电话,然后会关闭显示屏,防止用户因误操作影响通话。接近光传感器的工作原理是:红外线发射灯发出的不可见红外光由附近的物体反射后,被红外辐射光线探测器探测到,发出的不可见红外光一般采用近红外光谱波段。
6)地磁传感器(magnetism sensor),地磁传感器是一类利用被测物体在地磁场中运动状态不同的测量装置。由于地磁场在不同方向上的磁通量分布不同,因此可通过感应地磁场分布的变化,指示被测物体的姿态和运动角度等信息。地磁传感器一般用于终端设备的指南针或导航应用中,通过计算出终端设备在三维空间中的具体朝向,帮助用户实现准确定位。
3、情景识别,又称情景感知(context awareness),源于所谓普适计算的研究,最早由Schilit于1994年提出。情景感知有诸多定义,简单说就是通过传感器及其相关的技术使计算机设备能够“感知”到当前的情景。能够用于情景感知的信息亦有很多,如温度、位置、加速度、音频、视频等等。
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例进行介绍。
本申请实施例提供的业务处理的方法可以应用于终端设备,该终端设备可以是移动电话、平板电脑(tablet personal computer)、膝上型电脑(laptop computer)、数码相机、个人数字助理(personal digital assistant,简称PDA)、导航装置、移动上网装置(mobile internet device,MID)、可穿戴式设备(wearable device)、智能手表以及智能手环等。当然,在以下实施例中,对该终端设备的具体形式不作任何限制。其中,终端设备可以搭载的系统可以包括安卓(Android)或者其它操作系统等,本申请实施例对此不作任何限制。
以搭载安卓(Android)操作系统的终端设备为例,如图1a所示,图1a为本申请实施例中一个系统架构示意图,终端设备从逻辑上可划分为硬件层、操作系统,以及应用层。硬件层包括主处理器、微控制器单元、调制解调器、Wi-Fi模块、传感器、定位模块等硬件资源。应用层包括一个或多个应用程序,应用程序可以为社交类应用、电子商务类应用、浏览器、多媒体类应用以及导航应用等任意类型的应用程序,还可以为情景识别模型以及人工智能算法等应用程序。操作系统作为硬件层和应用层之间的软件中间件,是管理和控制硬件与软件资源的应用程序。
硬件层中,除主处理器、传感器、Wi-Fi模块等硬件资源以外,还包括有永远在线(always on,AO)区,永远在线区中的硬件通常情况下全天候开启,永远在线区中还包括传感器控制中心(sensor hub)、AI处理器以及传感器等硬件资源,sensor hub中包含有协处理器以及传感器处理器,传感器处理器用于处理传感器输出的数据,AI处理器以及传感器处理器生成的数据经过协处理器进一步处理后,由协处理器与主处理器建立交互联系。其中永远在线区中的传感器包括有:红外线图像传感器、陀螺仪、加速度传感器以及音频采集器(mic)等,传感器处理器包括有:迷你图像信号处理器(mini ISP)以及音频信号处理器(ASP)。为了便于理解,AO区与硬件层之间的连接关系如图1b所示,图1b为本申请实施例中另一个系统架构示意图。
在一个实施例中,操作系统包括内核,硬件抽象层(hardware abstraction layer,HAL)、库和运行时(libraries and runtime)以及框架(framework)。其中,内核用于提供底层系统组件和服务,例如:电源管理、内存管理、线程管理、硬件驱动程序等;硬件驱动程序包括Wi-Fi驱动、传感器驱动、定位模块驱动等。硬件抽象层是对内核驱动程序的封装,向框架提供接口,屏蔽低层的实现细节。硬件抽象层运行在用户空间,而内核驱动程序运行在内核空间。
库和运行时也叫做运行时库,它为可执行程序在运行时提供所需要的库文件和执行环境。库与运行时包括安卓运行时(android runtime,ART)以及库等。ART是能够把应用程序的字节码转换为机器码的虚拟机或虚拟机实例。库是为可执行程序在运行时提供支持的程序库,包括浏览器引擎(比如webkit)、脚本执行引擎(比如JavaScript引擎)、图形处理引擎等。
框架用于为应用层中的应用程序提供各种基础的公共组件和服务,比如窗口管理、位置管理等等。框架可以包括电话管理器,资源管理器,位置管理器等。
以上介绍的操作系统的各个组件的功能均可以由主处理器执行存储器中存储的程序来实现。
所属领域的技术人员可以理解终端可包括比图1a以及图1b所示的更少或更多的部件,图1a以及图1b所示的该终端设备仅包括与本申请实施例所公开的多个实现方式更加相关的部件。
如图2所示,图2为本申请实施例提供的业务处理的方法涉及的使用场景示意图。在该使用场景中,在终端设备上设有处理器,该处理器包括至少两个核心。该至少两个核心可以包括CPU以及AI处理器等。AI处理器包括但不限于神经网络处理器、张量处理器以及GPU等。这些芯片可以称之为核心,用于在终端设备上进行计算。其中,不同的核心有不同的能效比。
终端设备可以使用具体的算法执行不同的应用业务,本申请实施例的方法涉及运行情景识别模型,终端设备可以使用情景识别模型确定当前使用终端设备的用户所在的目标情景,并根据确定的目标情景执行不同的业务处理方式。
终端设备在确定当前使用终端设备的用户所在的目标情景时,会依据不同的传感器采集得到的数据以及情景识别模型中的AI算法,确定不同的目标情景。
为此,本申请实施例提供了一种业务处理的方法,本申请以下各实施例主要介绍终端设备如何根据不同的传感器采集得到的数据以及情景识别模型,确定终端设备所在的目标情景以及对应目标情景的业务处理方式。
下面以实施例的方式,对本申请技术方案做进一步的说明,请参阅图3,图3为本申请实施例提供的一种业务处理的方法的实施例示意图,本申请实施例提供的一种业务处理的方法的实施例包括:
301、启动定时器;
本实施例中,终端设备启动与传感器相连接的定时器,该定时器用于指示与其相连的传感器采集数据的时间间隔。AO区中的协处理器根据情景识别模型的要求,设置不同传感器对应的定时器的定时时间。例如:与加速度传感器对应的定时器,可设置定时时间为100毫秒(millisecond,ms),含义为,每隔100ms采集一次加速度数据,并将该加速度数据存储至终端设备指定的缓存区域中。
这里的定时时间既可以根据情景识别模型的要求进行设置,也可以根据传感器寿命、缓存空间占用率以及功耗情况等多方面需求进行设置,例如:对于红外线图像传感器而言,红外线图像传感器本身可以采集较高帧频的红外图像,但是长时间的连续采集会对传感器本身造成损伤,影响寿命。同时,长时间的连续采集会导致红外线图像传感器耗电量增加,降低终端设备的使用时长。综合上述情况以及实际情景识别模型的需求,可设置与红外线图像传感器相连的定时器的定时时间,例如:人脸识别情景下,可设置采集图像的定时时间为1/6秒,即每秒采集图像6帧;在其它识别情景下,可设置采集图像的定时时间为1秒,即每秒采集图像1帧。还可以设置为:当终端设备处于低电量模式时,定时时间设置为1秒,以达到延长终端设备使用时长的目的。对于一些功耗较低以及采集得到的数据占用存储空间较小的传感器,可不设置该传感器的定时时间,以达到实时采集数据的目的。
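作为上述“按传感器、识别情景与电量状态分别设置定时时间”策略的一个示意,下面给出一个Python草图,其中传感器名称、间隔取值与低电量策略均为示例性假设,并非本申请限定的取值:

```python
# 各传感器默认采集间隔(秒),均为示例值
DEFAULT_INTERVALS = {
    "accelerometer": 0.1,   # 每隔100ms采集一次加速度数据
    "ir_image": 1.0,        # 一般识别情景下每秒采集1帧图像
}

def get_interval(sensor, scenario=None, low_battery=False):
    """根据当前情景与电量状态返回传感器的定时时间。"""
    if sensor == "ir_image":
        if low_battery:
            return 1.0          # 低电量模式下降低帧率,延长使用时长
        if scenario == "face_recognition":
            return 1.0 / 6      # 人脸识别情景下提高采集频率(每秒6帧)
    # 未配置定时时间的低功耗传感器返回None,表示实时采集
    return DEFAULT_INTERVALS.get(sensor)
```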
需要说明的是,定时器既可以为与传感器相连接的具有定时功能的芯片,也可以是传感器内置的定时功能,此处不作限定。
302、传感器采集得到数据;
本实施例中,定时器在到达定时时间后,指示相连接的传感器启动并采集数据。具体需要什么传感器采集数据,由协处理器根据情景识别模型选择。例如:当需要确定当前是否处于扫描二维码情景时,终端设备通过红外线图像传感器采集得到数据,在对该红外线图像传感器采集得到的数据进行处理以及运算后,即可完成情景识别的过程。当需要确定当前是否处于会议情景时,终端设备除了使用红外线图像传感器采集得到数据,还需要使用音频采集器采集得到数据,在对该红外线图像传感器采集得到的数据以及音频采集器采集得到的数据进行处理以及运算后,即可完成情景识别的过程。
以红外线图像传感器为例,在到达与红外线图像传感器对应的定时时间后,红外线图像传感器采集图像数据,该图像数据包含有IR图像和RGB图像,其中IR图像为灰度图,可用于展示低光环境下的所拍摄的外界信息,RGB图像为彩色图像,可用于展示非低光环境下所拍摄的外界信息,红外线图像传感器将采集得到的图像数据存储至缓存空间中,以供后续步骤使用。
获取红外线图像传感器采集得到的图像数据存在两种不同的应用情景:第一种应用情景是终端设备内与终端设备的主屏幕相同平面的壳体内布置有第一红外线图像传感器;第二种应用情景是终端设备内与终端设备的主图像传感器相同平面的壳体内布置有第二红外线图像传感器。下面分别对这两种情况进行介绍。
第一种应用情景中,第一红外线图像传感器可以采集投影至终端设备的主屏幕的图像数据,例如当用户使用终端设备进行自拍操作时,布置于与终端设备的主屏幕相同的平面内的第一红外线图像传感器可以采集用户的脸部图像数据。
第二种应用情景中,第二红外线图像传感器可以采集投影至终端设备的主图像传感器的图像数据,例如当用户使用终端设备的主图像传感器进行扫描二维码操作时,布置于与终端设备的主图像传感器相同的平面内的第二红外线图像传感器可以采集二维码图像数据。
需要说明的是,在同一终端设备中,可以同时布置第一红外线图像传感器与第二红外线图像传感器,布置的方式与采集数据的方式与前述方式相似,此处不再赘述。
音频采集器可布置于终端设备的壳体上任意位置,一般以16千赫兹的采样频率采集终端设备所在环境的音频数据。
加速度传感器布置于终端设备内部的always on区中,采用两线式串行总线接口(inter-integrated circuit,I2C)或串行外设接口(serial peripheral interface,SPI)和sensor hub相连,一般提供±2重力(gravity,G)至±16重力(gravity,G)的加速度测量范围,采集得到的加速度数据精度小于16比特(bit)。
需要说明的是,传感器采集得到的数据既可以直接发送至传感器处理器或情景识别模型进行处理,也可以存储至缓存区域,传感器处理器或情景识别模型通过读取缓存区域中的传感器数据进行处理,此处不作限定。
303、传感器处理器处理数据;
本实施例中,传感器采集得到数据之后,该采集得到的数据可经由与传感器对应的传感器处理器(又称为与传感器对应的数字信号处理器)进行数据预处理,生成供后续情景识别模型使用的待处理数据。
以与红外线图像传感器对应的传感器处理器miniISP为例,在获取红外线图像传感器采集得到的图像数据后,miniISP对该图像数据进行处理,例如,当传感器采集得到的图像数据的分辨率(image resolution)为640像素乘480像素时,miniISP可将该图像数据进行压缩处理,生成320像素乘240像素的待处理图像数据。miniISP还可以对图像数据进行自动曝光(automatic exposure,AE)处理。除了上述处理方式,miniISP还可用于根据图像数据中包含的亮度信息,自动选择图像数据中所需处理的图像,例如,当miniISP确定当前图像为低光环境下采集得到的,由于IR图像所包含的低光环境下的图像细节信息比RGB图像更多,因此选择图像数据中的IR图像进行处理。
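下面用一个Python草图示意miniISP的上述两种处理:按亮度选择IR或RGB图像,并将640×480的图像压缩为320×240。其中亮度阈值与压缩方式均为假设,实际ISP的实现要复杂得多,此草图仅供理解:

```python
import numpy as np

def mini_isp(ir_img, rgb_img, low_light_threshold=40):
    """ir_img: (480, 640) 的IR灰度图;rgb_img: (480, 640, 3) 的RGB彩色图,取值0~255。
    低光环境下选择细节更多的IR图像,否则选择RGB图像,再下采样输出。"""
    brightness = rgb_img.mean()              # 以RGB均值粗略估计环境亮度(假设的判据)
    img = ir_img if brightness < low_light_threshold else rgb_img
    return img[::2, ::2]                     # 隔行隔列抽取:640x480 -> 320x240
```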
可以理解的是,并不是所有的传感器都需要传感器处理器处理数据,例如,加速度传感器采集得到的加速度数据,情景识别模型可以直接使用。步骤303为可选步骤。
304、确定目标情景;
本实施例中,终端设备使用传感器采集得到的数据和/或经过传感器处理器处理后的待处理数据,根据情景识别模型确定对应的目标情景。情景识别模型运行于协处理器和AI处理器中,情景识别模型中的AI算法运行于AI处理器中。对于不同的传感器采集得到的数据,在情景识别模型中数据流动的方向与顺序是不同的,例如:对于miniISP根据图像数据处理生成的待处理图像数据以及ASP根据音频数据生成的待处理音频数据,先加载至情景识别模型中AI处理器上运行的AI算法,然后协处理器根据AI处理器的计算结果确定目标情景。对于加速度传感器采集生成的加速度数据,先经过协处理器的处理,再加载至情景识别模型中AI处理器上运行的AI算法,最后协处理器根据AI处理器的计算结果确定目标情景。
其中,情景识别模型包含两部分:第一部分是AI算法,AI算法中包含有根据传感器采集得到的数据集合以及经过传感器处理器处理后的待处理数据集合在线下训练得到的神经网络模型;第二部分是根据AI算法运算的结果确定目标情景,由协处理器完成。对于图像数据,通常采用的是卷积神经网络(convolutional neural network,CNN);对于音频数据,通常采用的是深度神经网络(deep neural network,DNN)/循环神经网络(recurrent neural network,RNN)/长短期记忆网络(long short-term memory,LSTM)。对于不同的数据可以采用不同的深度神经网络算法,对具体的算法类型不作限制。
CNN是一种前馈神经网络,它的人工神经元可以响应一部分覆盖范围内的周围单元,对于大型图像处理有出色表现。该CNN由一个或多个卷积层和顶端的全连通层(对应经典的神经网络)组成,同时也包括关联权重和池化层(pooling layer)。这一结构使得CNN能够利用输入数据的二维结构。该CNN中的卷积层的卷积核会对图像进行卷积,卷积就是用一个特定参数的滤波器去扫描图像,提取图像的特征值。
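“用一个特定参数的滤波器去扫描图像,提取图像的特征值”这一卷积操作,可以用如下Python草图直观说明。这是无填充、步长为1的朴素实现,卷积核取一个常见的边缘检测核作为示例,并非本申请限定的卷积核参数:

```python
import numpy as np

def conv2d(image, kernel):
    """对单通道图像做一次朴素的二维卷积(valid模式,步长1)。"""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # 滤波器扫过的窗口与卷积核逐元素相乘后求和,得到一个特征值
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])   # 边缘检测卷积核(示例参数)
features = conv2d(np.random.rand(32, 32), edge_kernel)
```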
线下训练是指在tensorflow、caffe(convolutional architecture for fast feature embedding)等深度学习框架上进行模型设计与训练。
以红外线图像传感器为例。终端设备中可应用红外线图像数据的情景识别模型有多种,例如:扫描二维码情景识别模型、被扫码情景识别模型以及自拍情景识别模型等,终端设备中可应用一种或多种的情景识别模型,下面分别进行介绍。
在扫描二维码情景识别模型中,加载于AI处理器中的线下训练所得到的神经网络模型,采用CNN算法,通过传感器采集10万张二维码图像和10万张非二维码图像,并分别标注(有二维码或无二维码),在tensorflow上训练之后得到神经网络模型及相关参数,之后将第二红外线图像传感器采集得到的图像数据,输入至该神经网络模型中进行网络推导,就可得到该图像中是不是包含有二维码的结果。需要说明的是,扫描二维码情景识别模型中,若线下训练时采集的图像不仅为二维码图像,还包括条形码图像等其他图形标识,则扫描二维码情景识别模型还可以识别终端设备获取得到的图像中是否包含有条形码等结果。
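与上述“标注、线下训练、网络推导”的流程相对应,下面给出一个基于tf.keras的二分类CNN草图。其中的网络层数、通道数与输入尺寸均为假设,并非本申请实际采用的模型结构,仅用于说明训练与推导的流程:

```python
import tensorflow as tf

def build_qr_model(input_shape=(240, 320, 1)):
    """判断图像中是否包含二维码的二分类CNN(结构为示意)。"""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 输出包含二维码的概率
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# 线下训练:images为已标注图像,labels中1表示有二维码、0表示无二维码
# model = build_qr_model()
# model.fit(images, labels, epochs=10, validation_split=0.1)
```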
在被扫码情景识别模型中,加载于AI处理器中的线下训练所得到的神经网络模型,采用CNN算法,通过传感器采集10万张包含扫码设备的图像和10万张不含扫码设备的图像,该包含扫码设备的图像为传感器采集得到的包含有扫码仪、扫码枪、智能手机以及智能手环等可穿戴设备中扫描部分的图像数据,以智能手机为例,当图像中包含有智能手机中主图像传感器部分时,该图像为包含扫码设备的图像。并分别标注(有扫码设备或无扫码设备),在tensorflow上训练之后得到神经网络模型及相关参数,之后将第一红外线图像传感器采集得到的图像数据,输入至该神经网络模型中进行网络推导,就可得到该图像中是不是包含有扫码设备的结果。
在自拍情景识别模型中,加载于AI处理器中的线下训练所得到的神经网络模型,采用CNN算法,通过传感器采集10万张包含人脸的图像和10万张不含人脸的图像,该包含人脸的图像为包含部分或全部的人体脸部的图像,并分别标注(有人脸或无人脸),在tensorflow上训练之后得到神经网络模型及相关参数,之后将第一红外线图像传感器采集得到的图像数据,输入至该神经网络模型中进行网络推导,就可得到该图像中是不是包含有人脸的结果。
需要说明的是,除了应用红外线图像传感器采集的图像数据确定目标情景以外,还可以应用音频采集器采集得到的音频数据、加速度传感器采集得到的加速度数据等多种传感器采集得到的数据确定目标情景。例如,可以应用图像数据、音频数据以及加速度数据确定当前终端设备所处的情景是否为运动情景。应用多种数据确定当前终端设备所处的情景 是否为驾驶情景等。
本申请对具体采用的算法、线下训练所使用的深度学习平台以及线下训练时传感器采集的数据样本量不做具体限定。
305、确定业务处理方法。
本实施例中,协处理器在确定目标情景之后,可以由协处理器确定与目标情景对应的业务处理方法,也可以通过协处理器将确定的目标情景发送至主处理器中,由主处理器确定与目标情景对应的业务处理方法。
根据不同的情景,对应有多种不同的业务处理方法,例如:若目标情景为驾驶情景,则根据驾驶情景确定业务处理方式为启动终端设备的驾驶模式和/或启动终端设备中应用程序的驾驶模式功能和/或在终端设备的屏幕待机常显区显示驾驶模式图标,其中,终端设备的驾驶模式包括导航功能以及语音助手,驾驶模式图标用于启动驾驶模式。启动终端设备的驾驶模式以及启动终端设备中应用程序的驾驶模式功能,为主处理器执行的步骤;在终端设备的屏幕待机常显区显示驾驶模式图标,为协处理器执行的步骤。
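上述“根据目标情景查找预设的业务处理方式并执行”的逻辑,可用如下Python草图示意。其中的情景名与动作名均为假设的占位符,execute为假设的执行接口:

```python
# 目标情景到预设业务处理方式的映射(均为示意)
SCENARIO_ACTIONS = {
    "driving": ("enable_driving_mode", "show_aod_driving_icon"),
    "meeting": ("enable_silent_mode", "show_aod_mute_icon"),
    "sport":   ("enable_sport_mode", "show_aod_music_icon"),
    "scan_qr": ("open_main_camera", "launch_qr_scan_app"),
}

def handle_scenario(scenario, execute):
    """协处理器/主处理器根据确定的目标情景,依次执行对应的业务处理方式。"""
    for action in SCENARIO_ACTIONS.get(scenario, ()):
        execute(action)
```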
本申请实施例中,提供了一种业务处理的方法,终端设备使用传统传感器、红外线图像传感器以及音频采集器等多种传感器,采集外界多维度信息,提升了终端设备的感知能力。由于AI处理器是针对AI算法进行优化的专用芯片,因此终端设备使用该AI处理器可以极大地提高AI算法的运行速度,并降低终端设备的功耗。由于协处理器运行于终端设备的always on区上,无需开启主处理器即可工作,因此在终端设备处于灭屏的状态下,依然可以进行情景识别。
接下来,将分别介绍在图3所对应的实施例基础上,终端设备在不同情景下,如何确定所在的目标情景以及对应目标情景的业务处理方式。
在图3所对应的实施例基础上,如图4所示,图4为本申请实施例提供的一种应用程序智能启动的实施例示意图,本申请实施例提供的一种应用程序智能启动的实施例包括:
401、启动定时器;
本实施例中,步骤401与图3中步骤301相似,此处不再赘述。
402、获取传感器采集得到的数据;
本实施例中,步骤402与图3中步骤302相似,此处不再赘述。
403、传感器处理器处理数据;
本实施例中,步骤403与图3中步骤303相似,此处不再赘述。
404、确定是否为目标情景;
本实施例中,根据传感器采集得到的数据确定是否为目标情景的方法与图3中步骤304中的方法相似,此处不再赘述。
若终端设备根据当前获取的数据,确定终端设备所处于的情景为目标情景,则进入步骤405;若终端设备根据当前获取的数据,确定终端设备所处于的情景不为目标情景,则进入步骤401,等待获取并处理下一次传感器采集得到的数据。
405、启动目标应用程序。
本实施例中,当终端设备根据传感器采集得到的数据,确定当前终端设备所处的目标情景后,终端设备可以启动与目标情景对应的目标应用程序。
例如,当终端设备确定当前情景为运动情景后,终端设备可以启动导航应用程序,如高德地图等,还可以启动健康监测应用程序用以监测终端设备使用者的生理数据,还可以启动音乐播放应用程序并自动播放音乐。
以红外线图像传感器为例,对应步骤404中的三种可应用红外线图像数据的情景识别模型,下面分别进行介绍。
终端设备根据计算结果,得到当前图像中包含有二维码时,终端设备可确定当前终端设备处于扫描二维码情景,此时终端设备可自动打开主图像传感器相关联的应用程序、启动主图像传感器并启动主屏幕,例如相机应用程序。或打开具有扫描二维码功能的应用程序并进一步打开该应用程序中扫描二维码的功能,例如打开浏览器应用程序中“扫一扫”功能,其中“扫一扫”功能用于扫描二维码图像并将扫描得到的数据提供至浏览器使用。
终端设备根据计算结果,得到当前图像中包含有扫码设备时,终端设备可确定当前终端设备处于被扫码情景,此时终端设备可打开具有二维码和/或条形码的应用程序,在自动打开终端设备的主屏幕后,在主屏幕上显示应用程序的二维码和/或条形码。例如,当终端设备确定当前图像中包含有扫码设备时,打开终端设备的主屏幕并显示支付应用程序的支付二维码和/或条形码,该支付应用程序可以为支付宝或微信等。
终端设备根据计算结果,得到当前图像中包含人脸时,终端设备可确定当前终端设备处于自拍情景,此时终端设备可启动与主屏幕同一平面内的副图像传感器,自动打开副图像传感器相关联的应用程序,例如相机应用程序中的自拍功能并启动主屏幕,在主屏幕上显示相机应用程序中自拍功能界面。
本申请实施例中,终端设备可以在使用红外线图像传感器的基础上,自动识别当前情景,并根据识别得到的情景,智能化地启动与目标情景所对应的应用程序。提升了用户的操作便利性。
在图3所对应的实施例基础上,如图5所示,图5为本申请实施例提供的一种智能推荐服务的实施例示意图,本申请实施例提供的一种智能推荐服务的实施例包括:
501、启动定时器;
本实施例中,步骤501与图3中步骤301相似,此处不再赘述。
502、获取传感器采集得到的数据;
本实施例中,步骤502与图3中步骤302相似,此处不再赘述。
503、传感器处理器处理数据;
本实施例中,步骤503与图3中步骤303相似,此处不再赘述。
504、确定是否为目标情景;
本实施例中,根据传感器采集得到的数据确定是否为目标情景的方法与图3中步骤304中的方法相似,此处不再赘述。
若终端设备根据当前获取的数据,确定终端设备所处于的情景为目标情景,则进入步骤505;若终端设备根据当前获取的数据,确定终端设备所处于的情景不为目标情景,则进入步骤501,等待获取并处理下一次传感器采集得到的数据。
505、推荐目标服务。
本实施例中,当终端设备根据传感器采集得到的数据,确定当前终端设备所处的目标 情景后,终端设备可以推荐与目标情景对应的目标服务。下面对推荐目标服务的具体方法进行介绍。
若终端设备确定终端设备所处的目标情景后,可向终端设备使用者推荐目标情景对应的目标服务,包括在终端设备的息屏提示(always on display,AOD)区显示目标服务的功能入口、在终端设备的AOD区显示目标服务中包含的应用程序的程序入口、自动启动目标服务以及自动启动目标服务中包含的应用程序。
例如:当终端设备根据红外线图像传感器、音频采集器以及加速度传感器等传感器集合采集得到的数据,确认当前情景为会议情景或睡眠情景等终端设备所处环境较为安静的情景时,终端设备可以在AOD区中显示静音图标,终端设备可以通过接收用户对静音图标的操作指令,启动静音功能,该静音功能为设置终端设备中所有应用程序的音量为0。在AOD区中显示静音图标以外,还可以同时在AOD区中显示震动图标,终端设备可以通过接收用户对震动图标的操作指令,启动震动功能,该震动功能为设置终端设备中所有应用程序的音量为0并且设置终端设备中所有应用程序的提示音为震动。当一段时间内,例如15分钟,终端设备未能接收到AOD区中对应图标的操作指令时,终端设备可自动启动静音功能或震动功能。
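上述“在AOD区显示图标、一段时间内未收到操作指令则自动启动对应功能”的逻辑,可用如下Python草图示意。其中wait_for_tap、enable_silent均为假设的接口,15分钟超时取自正文中的例子:

```python
import time

def offer_silent_mode(wait_for_tap, enable_silent, timeout=15 * 60):
    """在AOD区显示静音图标后等待用户操作;超时则自动启动静音功能。
    wait_for_tap(seconds) 返回True表示在该时间内收到了用户对图标的操作指令。"""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if wait_for_tap(1):        # 每秒查询一次是否收到操作指令(示意)
            enable_silent()
            return "user_triggered"
    enable_silent()                # 例如15分钟内未收到指令,自动启动静音功能
    return "auto_triggered"
```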
当终端设备确定当前情景为运动情景时,终端设备可以在AOD区中显示音乐播放应用程序图标,终端设备可以通过接收用户对音乐播放应用程序图标的操作指令,启动音乐播放应用程序。
本申请实施例中,终端设备在灭屏状态等低功耗状态下,可进行服务推荐,并且可用图像、音频以及加速度数据等多种传感器数据作为情景感知的数据依据,通过深度学习算法提高情景感知的准确率,提升了用户的操作便利性。
在图3、图4以及图5对应的实施例基础上,如图6所示,图6为本申请实施例中一种业务处理的方法的应用场景流程示意图,本申请实施例中一种业务处理的方法的应用场景包括:
步骤S1中,当终端设备通过蓝牙连接上对端的设备时,用户可标记当前通过蓝牙连接的对端设备是否为汽车。对端设备被标记为汽车后,终端设备每次通过蓝牙与该对端设备连接时,终端设备可以确认当前蓝牙连接的对端设备为汽车。
终端设备AO区中的协处理器每隔一段时间,一般为10秒,获取终端设备的蓝牙连接状态;
步骤S2中、是否连接至汽车蓝牙
终端设备获取当前蓝牙连接状态后,可获知当前终端设备是否存在通过蓝牙连接的对端设备,若存在蓝牙连接的对端设备,则进一步确认当前蓝牙连接的对端设备是否带有用户设置的汽车标识,若对端设备带有用户设置的汽车标识,可确认当前终端设备连接至汽车蓝牙,进入步骤S8,若当前终端设备蓝牙状态处于未连接或蓝牙连接的对端设备没有用户设置的汽车标识,则进入步骤S3;
步骤S3中、终端设备获取运行于终端设备中打车软件的相关数据,并根据该打车软件的相关数据确认当前打车软件是否启动,即当前用户是否使用该打车软件。若根据打车软件的相关数据确认当前用户使用该打车软件时,则进入步骤S9,若根据打车软件的相关数 据确认当前用户未使用该打车软件时,则进入步骤S4;
步骤S4中、终端设备使用加速度传感器以及陀螺仪采集加速度数据以及角速度数据,并将采集得到的加速度数据以及角速度数据进行数据预处理,包括:对数据进行重采样,例如加速度传感器采集得到的原始加速度数据的采样率为100赫兹(hz),经过数据重采样后得到的加速度数据的采样率为1赫兹,具体重采样后得到数据的采样率由情景识别模型中应用的神经网络模型的样本采样率决定,一般与样本采样率一致。
将预处理后的数据存储至终端设备的随机存储器(random access memory,RAM)中,RAM包括有双倍速率同步动态随机存储器(double data rate,DDR)、DDR2、DDR3、DDR4以及未来即将面世的DDR5;
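步骤S4中100赫兹到1赫兹的重采样,可用如下Python草图示意。这里采用按窗口取均值的简单降采样,仅为一种可能的重采样方式;对加速度的每个轴分别调用即可:

```python
import numpy as np

def downsample(samples, src_rate=100, dst_rate=1):
    """把采样率为src_rate的一维传感器序列降采样到dst_rate,
    每个输出样本为对应时间窗口内原始样本的均值。"""
    factor = src_rate // dst_rate            # 100Hz -> 1Hz 时factor为100
    n = len(samples) // factor * factor      # 丢弃不足一个窗口的尾部数据
    return np.asarray(samples[:n]).reshape(-1, factor).mean(axis=1)
```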
步骤S5中、终端设备中情景识别模型获取RAM中存储的预处理后的加速度数据以及角速度数据,情景识别模型根据预处理后的加速度数据以及角速度数据,确认当前终端设备是否处于驾驶情景。若是,则进入步骤S6,若否,则进入步骤S9;
步骤S6中、当终端设备根据加速度数据与角速度数据确认当前终端设备处于驾驶情景后,由于根据加速度数据与角速度数据进行的情景识别结果可信度不高,还需要进一步获取其它传感器数据进行情景识别。终端设备获取红外线图像传感器采集得到的图像数据以及音频采集器采集得到的音频数据,并将采集得到的图像数据与音频数据存储至终端设备的RAM中,或者采集得到的图像数据与音频数据经过miniISP与ASP对应处理后,将处理后的图像数据与音频数据存储至终端设备的RAM中;
步骤S7中、终端设备获取RAM中的图像数据与音频数据,并将图像数据与音频数据加载至情景识别模型进行情景识别,根据图像数据与音频数据确认当前终端设备是否处于驾驶情景。若是,则进入步骤S8,若否,则进入步骤S9;
步骤S8中、终端设备在AOD区中显示驾驶情景图标,该驾驶情景图标为终端设备的驾驶情景功能入口,当终端设备接收到用户通过该驾驶情景图标触发的操作指令后,终端设备启动驾驶情景模式,包括有:启动导航应用程序,放大终端设备显示字符的字号,启动语音操作助手,该语音操作助手可根据用户的语音指令控制终端设备的操作,例如根据用户的语音指令进行拨打电话号码的操作;
步骤S9中、终端设备结束驾驶情景的识别操作。
本方案中,结合终端设备中众多传感器,利用加速度、角速度、图像以及音频等各个维度数据,通过人工智能算法来判断当前是否是驾驶情景,提高了驾驶情景的识别准确率。
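图6所示“先用低功耗的加速度/角速度模型初筛,命中后再用图像与音频数据复核”的两级级联判断,可用如下Python草图概括。其中的模型与数据获取接口均为假设,阈值仅为示例:

```python
def detect_driving(motion_model, av_model, get_motion_data, get_av_data,
                   threshold=0.5):
    """两级驾驶情景识别:两级都判定为驾驶情景才返回True。"""
    # 第一级:仅用加速度与角速度数据,功耗低但可信度有限
    if motion_model(get_motion_data()) < threshold:
        return False               # 初筛未命中,直接结束识别,节省功耗
    # 第二级:命中后才采集并加载图像与音频数据进行复核
    return av_model(get_av_data()) >= threshold
```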
请参考图7,图7为本申请实施例提供的一种计算机系统的结构示意图。该计算机系统可以为终端设备。如图所示,该计算机系统包括通信模块710、传感器720、用户输入模块730、输出模块740、处理器750、音视频输入模块760、存储器770以及电源780。进一步的,本实施例提供的计算机系统还可以包括AI处理器790。
通信模块710可以包括至少一个能使该计算机系统与通信系统或其他计算机系统之间进行通信的模块。例如,通信模块710可以包括有线网络接口,广播接收模块、移动通信模块、无线因特网模块、局域通信模块和位置(或定位)信息模块等其中的一个或多个。这多种模块均在现有技术中有多种实现,本申请不一一描述。
传感器720可以感测系统的当前状态,诸如打开/闭合状态、位置、与用户是否有接触、方向以及加速/减速,并且传感器720可以生成用于控制系统的操作的感测信号。传感器720中包含有红外线图像传感器、音频采集器、加速度传感器、陀螺仪、环境光传感器、接近光传感器以及地磁传感器中的一种或多种。
用户输入模块730,用于接收输入的数字信息、字符信息或接触式触摸操作/非接触式手势,以及接收与系统的用户设置以及功能控制有关的信号输入等。用户输入模块730包括触控面板和/或其他输入设备。
输出模块740包括显示面板,用于显示由用户输入的信息、提供给用户的信息或系统的各种菜单界面等。可选的,可以采用液晶显示器(liquid crystal display,LCD)或有机发光二极管(organic light-emitting diode,OLED)等形式来配置显示面板。在其他一些实施例中,触控面板可覆盖显示面板上,形成触摸显示屏。另外,输出模块740还可以包括音频输出模块、告警器以及触觉模块等。
音视频输入模块760,用于输入音频信号或视频信号。音视频输入模块760可以包括摄像头和麦克风。
电源780可以在处理器750的控制下接收外部电力和内部电力,并且提供系统的各个组件的操作所需的电力。
处理器750包括一个或多个处理器,处理器750为该计算机系统中的主处理器,例如,处理器750可以包括一个中央处理器和一个图形处理器。中央处理器在本申请中具有多个核,属于多核处理器。这多个核可以集成在同一块芯片上,也可以各自为独立的芯片。
存储器770存储计算机程序,该计算机程序包括操作系统程序772和应用程序771等。典型的操作系统如微软公司的Windows,苹果公司的MacOS等用于台式机或笔记本的系统,又如谷歌公司开发的基于Linux内核的安卓(Android)系统等用于移动终端的系统。前述实施例提供的方法可以通过软件的方式实现,可以认为是操作系统程序772的具体实现。
存储器770可以是以下类型中的一种或多种:闪速(flash)存储器、硬盘类型存储器、微型多媒体卡型存储器、卡式存储器(例如SD或XD存储器)、随机存取存储器(random access memory,RAM)、静态随机存取存储器(static RAM,SRAM)、只读存储器(read only memory,ROM)、电可擦除可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、可编程只读存储器(programmable ROM,PROM)、回滚保护存储块(replay protected memory block,RPMB)、磁存储器、磁盘或光盘。在其他一些实施例中,存储器770也可以是因特网上的网络存储设备,系统可以对在因特网上的存储器770执行更新或读取等操作。
处理器750用于读取存储器770中的计算机程序,然后执行计算机程序定义的方法,例如处理器750读取操作系统程序772从而在该系统运行操作系统以及实现操作系统的各种功能,或读取一种或多种应用程序771,从而在该系统上运行应用。
存储器770还存储有除计算机程序之外的其他数据773。
AI处理器790作为协处理器挂载到处理器750上,用于执行处理器750给它分配的任务。在本实施例中,AI处理器790可以被情景识别模型调用,从而实现情景识别中涉及的部分复杂算法。具体的,情景识别模型在处理器750的多个核上运行,当需要执行AI算法时,处理器750调用AI处理器790,AI处理器790的运算结果再返回给处理器750。
以上各个模块的连接关系仅为一种示例,本申请任意实施例提供的方法也可以应用在其它连接方式的终端设备中,例如所有模块通过总线连接。
在本申请实施例中,该终端设备所包括的处理器750还具有以下功能:
获取待处理数据,其中,该待处理数据由传感器采集得到的数据生成,该传感器中至少包含红外线图像传感器,该待处理数据中至少包含由该红外线图像传感器采集得到的图像数据生成的待处理图像数据;
通过情景识别模型确定该待处理数据所对应的目标情景,其中,该情景识别模型为传感数据集合以及情景类型集合训练得到的;
根据该目标情景确定业务处理方式。
处理器750具体用于执行如下步骤:
通过该情景识别模型中的AI算法确定该待处理数据所对应的该目标情景,其中,该AI算法包含深度学习算法,该AI算法运行于AI处理器790中。
处理器750具体用于执行如下步骤:
该传感器中至少还包含音频采集器以及第一子传感器中的一个,该待处理数据中至少包含待处理音频数据以及第一待处理子数据中的一个,其中,该待处理音频数据由该音频采集器采集得到的音频数据生成,该第一待处理子数据由该第一子传感器采集得到的第一子传感器数据生成。
处理器750具体用于执行如下步骤:
该处理器750还包含图像信号处理器、音频信号处理器以及该第一子传感器处理器中的至少一个,
该图像信号处理器,用于当到达图像采集预设运行时间时,通过该红外线图像传感器获取图像数据,其中该图像数据为该红外线图像传感器采集得到的数据;
该AI处理器790,具体用于通过该图像信号处理器获取该待处理图像数据,其中,该待处理图像数据由该图像信号处理器根据该图像数据生成;
和/或
该音频信号处理器,用于当到达音频采集预设运行时间时,通过该音频采集器获取该音频数据;
该AI处理器790,具体用于通过该音频信号处理器获取该待处理音频数据,其中,该待处理音频数据由该音频信号处理器根据该音频数据生成;
和/或
该第一子传感器处理器,用于当到达第一预设运行时间时,通过该第一子传感器获取第一子传感器数据,其中该第一子传感器数据为该第一子传感器采集得到的数据;
该协处理器,具体用于通过第一子传感器处理器获取该第一待处理子数据,其中,该第一待处理子数据由该第一子传感器处理器根据该第一子传感器数据生成。
处理器750具体用于执行如下步骤:
该协处理器,具体用于若该目标情景为扫描二维码情景,则根据该扫描二维码情景确定该业务处理方式为启动该终端设备主图像传感器和/或启动该终端设备中支持扫描二维码功能的应用程序。
处理器750具体用于执行如下步骤:
该协处理器,具体用于若该目标情景为会议情景,则根据该会议情景确定该业务处理方式为启动该终端设备的静音模式和/或启动该终端设备中应用程序的静音功能和/或在该终端设备的屏幕待机常显区显示静音模式图标,其中该静音模式图标用于启动该静音模式。
处理器750具体用于执行如下步骤:
该协处理器,具体用于若该目标情景为运动情景,则根据该运动情景确定该业务处理方式为启动该终端设备的运动模式和/或启动该终端设备中应用程序的运动模式功能和/或在该终端设备的屏幕待机常显区显示音乐播放图标,其中,该终端设备的运动模式包括计步功能,该音乐播放图标用于开始播放或暂停播放音乐。
处理器750具体用于执行如下步骤:
该协处理器,具体用于若该目标情景为驾驶情景,则根据该驾驶情景确定该业务处理方式为启动该终端设备的驾驶模式和/或启动该终端设备中应用程序的驾驶模式功能和/或在该终端设备的屏幕待机常显区显示驾驶模式图标,其中,该终端设备的驾驶模式包括导航功能以及语音助手,该驾驶模式图标用于启动该驾驶模式。
图8为本申请实施例提供的一种AI处理器的结构示意图。AI处理器800与主处理器和外部存储器相连。AI处理器800的核心部分为运算电路803,通过控制器804控制运算电路803提取存储器中的数据并进行数学运算。
在一些实现中,运算电路803内部包括多个处理引擎(process engine,PE)。在一些实现中,运算电路803是二维脉动阵列。运算电路803还可以是一维脉动阵列或者能够执行例如乘法和加法这样的数学运算的其它电子线路。在另一些实现中,运算电路803是通用的矩阵处理器。
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路803从权重存储器802中取矩阵B相应的数据,并缓存在运算电路803的每一个PE上。运算电路803从输入存储器801中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)808中。
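上述“取权重矩阵B的分块缓存在各PE上、与矩阵A做运算并把部分结果累加到累加器”的过程,可用如下NumPy草图近似。这里按K维分块模拟部分和的累加,并非脉动阵列逐拍流动的精确行为,分块大小为假设值:

```python
import numpy as np

def matmul_with_accumulator(A, B, tile=16):
    """按K维分块计算C = A @ B,模拟部分结果在累加器中逐步累加的过程。"""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    acc = np.zeros((M, N))                   # 对应图8中的累加器808
    for k0 in range(0, K, tile):
        # 每次取权重矩阵B的一个分块参与运算,部分结果累加进acc
        acc += A[:, k0:k0 + tile] @ B[k0:k0 + tile, :]
    return acc
```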
统一存储器806用于存放输入数据以及输出数据。权重数据直接通过存储单元访问控制器805(例如direct memory access controller,DMAC)被搬运到权重存储器802中。输入数据也通过存储单元访问控制器805被搬运到统一存储器806中。
总线接口单元810(bus interface unit,BIU),用于AXI(advanced extensible interface)总线与存储单元访问控制器805和取指存储器809(instruction fetch buffer)之间的交互。
总线接口单元810用于取指存储器809从外部存储器获取指令,还用于存储单元访问控制器805从外部存储器获取输入矩阵A或者权重矩阵B的原数据。
存储单元访问控制器805主要用于将外部存储器中的输入数据搬运到统一存储器806,或将权重数据搬运到权重存储器802中,或将输入数据搬运到输入存储器801中。
向量计算单元807通常包括多个运算处理单元,在需要的情况下,对运算电路803的输出做进一步处理,如向量乘、向量加、指数运算、对数运算、和/或大小比较等等。
在一些实现中,向量计算单元807能将经处理的向量存储到统一存储器806中。例如,向量计算单元807可以将非线性函数应用到运算电路803的输出,例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元807生成归一化的值、合并值,或二者均有。在一些实现中,经处理的向量能够用作运算电路803的激活输入。
与控制器804连接的取指存储器809用于存储控制器804使用的指令。
统一存储器806,输入存储器801,权重存储器802以及取指存储器809均为On-Chip存储器。图中的外部存储器与该AI处理器硬件架构独立。
下面对本申请实施例中一个实施例对应的业务处理装置进行详细描述,请参阅图9,图9为本申请实施例中业务处理装置的一个实施例示意图,本申请实施例中的业务处理装置90包括:
获取单元901,用于获取待处理数据,其中,该待处理数据由传感器采集得到的数据生成,该传感器中至少包含红外线图像传感器,该待处理数据中至少包含由该红外线图像传感器采集得到的图像数据生成的待处理图像数据;
确定单元902,用于通过情景识别模型确定该待处理数据所对应的目标情景,其中,该情景识别模型为传感数据集合以及情景类型集合训练得到的;
该确定单元902,还用于根据该目标情景确定业务处理方式。
本申请实施例中,终端设备通过部署于终端设备内部或与终端设备相连的传感器采集数据,该传感器中至少包括有红外线图像传感器,根据采集得到的数据生成待处理数据,待处理数据中至少包含由红外线图像传感器采集得到的图像数据生成的待处理图像数据。终端设备获取待处理数据后,可以通过情景识别模型确定这些待处理数据所对应的目标情景,该情景识别模型为使用传感器采集得到的数据集合与对应不同数据的情景类型集合在线下训练得到,线下训练为使用深度学习框架进行模型设计与训练。当终端设备确定当前的目标情景后,可根据该目标情景确定对应的业务处理方式。通过使用传感器采集得到的数据与情景识别模型可确定当前终端设备所处的目标情景,并根据目标情景确定对应的业务处理方式,终端设备无需额外操作即可自动确定对应与目标情景的业务处理方式,提升用户的使用便利度。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该确定单元902,具体用于通过该情景识别模型中的AI算法确定该待处理数据所对应的该目标情景,其中,该AI算法包含深度学习算法,该AI算法运行于AI处理器中。
本申请实施例中,终端设备具体使用情景识别模型中的AI算法确定待处理数据所对应的目标情景,AI算法中包含有深度学习算法,运行于终端设备中的AI处理器上。由于AI处理器具有强大的并行运算能力,运行AI算法时具有效率高的特点,因此情景识别模型使用AI算法确定具体的目标情景,并将AI算法运行于终端设备中的AI处理器上,提升了情景识别的效率,进一步提升了用户的使用便利度。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该传感器中至少还包含音频采集器以及第一子传感器中的一个,该待处理数据中至少包含待处理音频数据以及第一待处理子数据中的一个,其中,该待处理音频数据由该音频采集器采集得到的音频数据生成,该第一待处理子数据由该第一子传感器采集得到的第一子传感器数据生成。
本申请实施例中,部署于终端设备中的传感器,除了红外线图像传感器以外,还包含有音频采集器以及第一子传感器中的一个,第一子传感器可以为加速度传感器、陀螺仪、环境光传感器、接近光传感器以及地磁传感器等传感器中的一种或多种。音频采集器采集得到音频数据,经终端设备处理后生成待处理音频数据。第一子传感器采集得到第一子传感器数据,经终端设备处理后生成第一待处理子数据。终端设备使用多种传感器,多维度采集数据,提升了情景识别的准确性。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该获取单元901,具体用于当到达图像采集预设运行时间时,该获取单元901通过该红外线图像传感器获取图像数据,其中该图像数据为该红外线图像传感器采集得到的数据;
该获取单元901,具体用于通过图像信号处理器获取该待处理图像数据,其中,该待处理图像数据由该图像信号处理器根据该图像数据生成;
和/或
该获取单元901,具体用于当到达音频采集预设运行时间时,该获取单元901通过该音频采集器获取该音频数据;
该获取单元901,具体用于通过音频信号处理器获取该待处理音频数据,其中,该待处理音频数据由该音频信号处理器根据该音频数据生成;
和/或
该获取单元901,具体用于当到达第一预设运行时间时,该获取单元901通过该第一子传感器获取第一子传感器数据,其中该第一子传感器数据为该第一子传感器采集得到的数据;
该获取单元901,具体用于通过第一子传感器处理器获取该第一待处理子数据,其中,该第一待处理子数据由该第一子传感器处理器根据该第一子传感器数据生成。
本申请实施例中,红外线图像传感器、音频采集器以及第一子传感器中的一种或多种,分别可以在达到各自的预设的运行时间之后,采集与传感器相对应的数据,采集得到原始的传感器数据后,终端设备使用与传感器对应的处理器处理原始的传感器数据生成待处理的传感器数据。通过设置预设的运行时间,定时开启传感器采集数据,采集得到的原始数据可以经过与传感器对应的处理器处理。降低了情景识别模型所占用的缓存空间,降低了情景识别模型的功耗,提升了终端设备的待机使用时长。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该确定单元902,具体用于若该确定单元902确定该目标情景为扫描二维码情景,则该确定单元902根据该扫描二维码情景确定该业务处理方式为启动该终端设备主图像传感器和/或启动该终端设备中支持扫描二维码功能的应用程序。
本申请实施例中,终端设备根据终端设备中的一种或多种传感器采集得到的数据,确定与传感器采集得到的数据对应的目标情景为扫描二维码情景时,确定与扫描二维码情景所对应的业务处理方式,包括有启动终端设备中主图像传感器,终端设备可以使用该主图像传感器扫描二维码,终端设备还可以启动支持扫描二维码功能的应用程序,例如启动应用程序微信并打开微信中的扫描二维码功能。可以同时启动主图像传感器以及启动支持扫描二维码功能的应用程序,也可以根据预设的指令或接收用户的指令启动主图像传感器或启动支持扫描二维码的应用程序,此处不作限定。除了扫描二维码以外,还可以应用于扫描条形码等其它图形标识,此处不作限定。终端设备使用多维度传感器采集得到的数据,通过情景识别模型确定目标情景为扫描二维码情景后,可自动执行相关的业务处理方式,提升了终端设备的智能化程度,提升了用户的操作便捷性。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该确定单元902,具体用于若该确定单元902确定该目标情景为会议情景,则该确定单元902根据该会议情景确定该业务处理方式为启动该终端设备的静音模式和/或启动该终端设备中应用程序的静音功能和/或在该终端设备的屏幕待机常显区显示静音模式图标,其中该静音模式图标用于启动该静音模式。
本申请实施例中,终端设备根据终端设备中的一种或多种传感器采集得到的数据,确定与传感器采集得到的数据对应的目标情景为会议情景时,确定与会议情景所对应的业务处理方式,包括有启动终端设备的静音模式,终端设备处于静音模式时,运行于终端设备中的所有应用程序处于静音状态,终端设备还可以启动运行于终端设备中应用程序的静音功能,例如启动应用程序微信的静音功能,此时微信的提示音切换为静音,还可以在终端设备的屏幕待机常显区显示静音模式图标,终端设备可以通过静音模式图标接收用户的静音操作指令,终端设备响应于该静音操作指令启动静音模式。终端设备使用多维度传感器采集得到的数据,通过情景识别模型确定目标情景为会议情景后,可自动执行相关的业务处理方式,提升了终端设备的智能化程度,提升了用户的操作便捷性。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该确定单元902,具体用于若该确定单元902确定该目标情景为运动情景,则该确定单元902根据该运动情景确定该业务处理方式为启动该终端设备的运动模式和/或启动该终端设备中应用程序的运动模式功能和/或在该终端设备的屏幕待机常显区显示音乐播放图标,其中,该终端设备的运动模式包括计步功能,该音乐播放图标用于开始播放或暂停播放音乐。
本申请实施例中,终端设备根据终端设备中的一种或多种传感器采集得到的数据,确定与传感器采集得到的数据对应的目标情景为运动情景时,确定与运动情景所对应的业务处理方式,包括有启动终端设备的运动模式,终端设备处于运动模式时,终端设备启动计步应用程序以及生理数据监测应用程序,通过使用终端设备中相关传感器,记录用户的步数与相关生理数据。终端设备还可以启动终端设备中应用程序的运动模式功能,例如启动应用程序网易云音乐的运动功能,此时网易云音乐的播放模式为运动模式,还可以在终端设备的屏幕待机常显区显示音乐播放图标,终端设备可以通过音乐播放图标接收用户的音乐播放指令,终端设备响应于该音乐播放指令开始播放或暂停播放音乐。终端设备使用多维度传感器采集得到的数据,通过情景识别模型确定目标情景为运动情景后,可自动执行相关的业务处理方式,提升了终端设备的智能化程度,提升了用户的操作便捷性。
在图9所对应的实施例的基础上,本申请实施例提供的业务处理装置90的另一个实施例中:
该确定单元902,具体用于若该确定单元902确定该目标情景为驾驶情景,则该确定单元902根据该驾驶情景确定该业务处理方式为启动该终端设备的驾驶模式和/或启动该终端设备中应用程序的驾驶模式功能和/或在该终端设备的屏幕待机常显区显示驾驶模式图标,其中,该终端设备的驾驶模式包括导航功能以及语音助手,该驾驶模式图标用于启动该驾驶模式。
本申请实施例中,终端设备根据终端设备中的一种或多种传感器采集得到的数据,确定与传感器采集得到的数据对应的目标情景为驾驶情景时,确定与驾驶情景所对应的业务处理方式,包括有启动终端设备的驾驶模式,终端设备处于驾驶模式时,终端设备启动语音助手,终端设备可根据用户输入的语音指令执行相关操作,终端设备还可以启动导航功能。终端设备还可以启动终端设备中应用程序的驾驶模式功能,例如启动应用程序高德地图的驾驶功能,此时高德地图的导航模式为驾驶模式,还可以在终端设备的屏幕待机常显区显示驾驶模式图标,终端设备可以通过驾驶模式图标接收用户的驾驶模式指令,终端设备响应于该驾驶模式指令启动驾驶模式。终端设备使用多维度传感器采集得到的数据,通过情景识别模型确定目标情景为驾驶情景后,可自动执行相关的业务处理方式,提升了终端设备的智能化程度,提升了用户的操作便捷性。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述介绍的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所介绍的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,上述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (28)

  1. 一种业务处理的方法,其特征在于,所述方法应用于终端设备,所述方法包括:
    获取待处理数据,其中,所述待处理数据由传感器采集得到的数据生成,所述传感器中包括图像传感器,所述待处理数据中包括由所述图像传感器采集得到的图像数据生成的待处理图像数据;
    通过情景识别模型确定所述待处理数据所对应的目标情景,其中,所述情景识别模型为传感数据集合以及情景类型集合训练得到的;
    根据所述目标情景确定业务处理方式。
  2. 根据权利要求1所述的方法,其特征在于,所述通过所述情景识别模型确定所述待处理数据所对应的所述目标情景,包括:
    通过所述情景识别模型中的AI算法确定所述待处理数据所对应的所述目标情景,其中,所述AI算法包含深度学习算法,所述AI算法运行于AI处理器中。
  3. 根据权利要求2所述的方法,其特征在于,
    所述传感器中至少还包含音频采集器以及第一子传感器中的一个,所述待处理数据中至少包含待处理音频数据以及第一待处理子数据中的一个,其中,所述待处理音频数据由所述音频采集器采集得到的音频数据生成,所述第一待处理子数据由所述第一子传感器采集得到的第一子传感器数据生成。
  4. 根据权利要求3所述的方法,其特征在于,所述获取所述待处理数据,包括:
    当到达图像采集预设运行时间时,通过所述图像传感器获取图像数据,其中所述图像数据为所述图像传感器采集得到的数据;
    通过图像信号处理器获取所述待处理图像数据,其中,所述待处理图像数据由所述图像信号处理器根据所述图像数据生成;
    和/或
    当到达音频采集预设运行时间时,通过所述音频采集器获取所述音频数据;
    通过音频信号处理器获取所述待处理音频数据,其中,所述待处理音频数据由所述音频信号处理器根据所述音频数据生成;
    和/或
    当到达第一预设运行时间时,通过所述第一子传感器获取第一子传感器数据,其中所述第一子传感器数据为所述第一子传感器采集得到的数据;
    通过第一子传感器处理器获取所述第一待处理子数据,其中,所述第一待处理子数据由所述第一子传感器处理器根据所述第一子传感器数据生成。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,根据所述目标情景确定所述业务处理方式,包括:
    若所述目标情景为扫描二维码情景,则根据所述扫描二维码情景确定所述业务处理方式为启动所述终端设备主图像传感器和/或启动所述终端设备中支持扫描二维码功能的应用程序。
  6. 根据权利要求1至4中任一项所述的方法,其特征在于,根据所述目标情景确定所述业务处理方式,包括:
    若所述目标情景为会议情景,则根据所述会议情景确定所述业务处理方式为启动所述终端设备的静音模式和/或启动所述终端设备中应用程序的静音功能和/或在所述终端设备的屏幕待机常显区显示静音模式图标,其中所述静音模式图标用于启动所述的静音模式。
  7. 根据权利要求1至4中任一项所述的方法,其特征在于,根据所述目标情景确定所述业务处理方式,包括:
    若所述目标情景为运动情景,则根据所述运动情景确定所述业务处理方式为启动所述终端设备的运动模式和/或启动所述终端设备中应用程序的运动模式功能和/或在所述终端设备的屏幕待机常显区显示音乐播放图标,其中,所述终端设备的运动模式包括计步功能,所述音乐播放图标用于开始播放或暂停播放音乐。
  8. 根据权利要求1至4中任一项所述的方法,其特征在于,根据所述目标情景确定所述业务处理方式,包括:
    若所述目标情景为驾驶情景,则根据所述驾驶情景确定所述业务处理方式为启动所述终端设备的驾驶模式和/或启动所述终端设备中应用程序的驾驶模式功能和/或在所述终端设备的屏幕待机常显区显示驾驶模式图标,其中,所述终端设备的驾驶模式包括导航功能以及语音助手,所述驾驶模式图标用于启动所述驾驶模式。
  9. 一种终端设备,其特征在于,包括:传感器、处理器,所述传感器中至少包含图像传感器;
    所述处理器,用于获取待处理数据,其中,所述待处理数据由所述传感器采集得到的数据生成,所述待处理数据中至少包含由所述图像传感器采集得到的图像数据生成的待处理图像数据;
    所述处理器,还用于通过情景识别模型确定所述待处理数据所对应的目标情景,其中,所述情景识别模型为所述传感器获取的传感数据集合以及情景类型集合训练得到的;
    所述处理器,还用于根据所述目标情景确定业务处理方式。
  10. 根据权利要求9所述的终端设备,其特征在于,所述处理器中还包含协处理器以及AI处理器,
    所述处理器,具体用于通过所述情景识别模型中的AI算法确定所述待处理数据所对应的所述目标情景,其中,所述AI算法包含深度学习算法,所述AI算法运行于所述AI处理器中。
  11. 根据权利要求10所述的终端设备,其特征在于,所述传感器中还包含音频采集器以及第一子传感器中的至少一个。
  12. 根据权利要求11所述的终端设备,其特征在于,所述处理器还包含图像信号处理器、音频信号处理器以及所述第一子传感器处理器中的至少一个,
    所述图像信号处理器,用于当到达图像采集预设运行时间时,通过所述图像传感器获取图像数据,其中所述图像数据为所述图像传感器采集得到的数据;
    所述AI处理器,具体用于通过所述图像信号处理器获取所述待处理图像数据,其中,所述待处理图像数据由所述图像信号处理器根据所述图像数据生成;
    和/或
    所述音频信号处理器,用于当到达音频采集预设运行时间时,通过所述音频采集器获取所述音频数据;
    所述AI处理器,具体用于通过所述音频信号处理器获取所述待处理音频数据,其中,所述待处理音频数据由所述音频信号处理器根据所述音频数据生成;
    和/或
    所述第一子传感器处理器,用于当到达第一预设运行时间时,通过所述第一子传感器获取第一子传感器数据,其中所述第一子传感器数据为所述第一子传感器采集得到的数据;
    所述协处理器,具体用于通过第一子传感器处理器获取所述第一待处理子数据,其中,所述第一待处理子数据由所述第一子传感器处理器根据所述第一子传感器数据生成。
  13. 根据权利要求9至12中任一项所述的终端设备,其特征在于,
    所述协处理器,具体用于若所述目标情景为扫描二维码情景,则根据所述扫描二维码情景确定所述业务处理方式为启动所述终端设备主图像传感器和/或启动所述终端设备中支持扫描二维码功能的应用程序。
  14. 根据权利要求9至12中任一项所述的终端设备,其特征在于,
    所述协处理器,具体用于若所述目标情景为会议情景,则根据所述会议情景确定所述业务处理方式为启动所述终端设备的静音模式和/或启动所述终端设备中应用程序的静音功能和/或在所述终端设备的屏幕待机常显区显示静音模式图标,其中所述静音模式图标用于启动所述的静音模式。
  15. 根据权利要求9至12中任一项所述的终端设备,其特征在于,
    所述协处理器,具体用于若所述目标情景为运动情景,则根据所述运动情景确定所述业务处理方式为启动所述终端设备的运动模式和/或启动所述终端设备中应用程序的运动模式功能和/或在所述终端设备的屏幕待机常显区显示音乐播放图标,其中,所述终端设备的运动模式包括计步功能,所述音乐播放图标用于开始播放或暂停播放音乐。
  16. 根据权利要求9至12中任一项所述的终端设备,其特征在于,
    所述协处理器,具体用于若所述目标情景为驾驶情景,则根据所述驾驶情景确定所述业务处理方式为启动所述终端设备的驾驶模式和/或启动所述终端设备中应用程序的驾驶模式功能和/或在所述终端设备的屏幕待机常显区显示驾驶模式图标,其中,所述终端设备的驾驶模式包括导航功能以及语音助手,所述驾驶模式图标用于启动所述驾驶模式。
  17. 一种业务处理装置,其特征在于,所述业务处理装置应用于终端设备,包括:
    获取单元,用于获取待处理数据,其中,所述待处理数据由传感器采集得到的数据生成,所述传感器中至少包含图像传感器,所述待处理数据中至少包含由所述图像传感器采集得到的图像数据生成的待处理图像数据;
    确定单元,用于通过情景识别模型确定所述待处理数据所对应的目标情景,其中,所述情景识别模型为传感数据集合以及情景类型集合训练得到的;
    所述确定单元,还用于根据所述目标情景确定业务处理方式。
  18. 根据权利要求17所述的业务处理装置,其特征在于,包括:
    所述确定单元,具体用于通过所述情景识别模型中的AI算法确定所述待处理数据所对应的所述目标情景,其中,所述AI算法包含深度学习算法,所述AI算法运行于AI处理器中。
  19. 根据权利要求18所述的业务处理装置,其特征在于,
    所述传感器中至少还包含音频采集器以及第一子传感器中的一个,所述待处理数据中至少包含待处理音频数据以及第一待处理子数据中的一个,其中,所述待处理音频数据由所述音频采集器采集得到的音频数据生成,所述第一待处理子数据由所述第一子传感器采集得到的第一子传感器数据生成。
  20. 根据权利要求19所述的业务处理装置,其特征在于,包括:
    所述获取单元,具体用于当到达图像采集预设运行时间时,所述获取单元通过所述图像传感器获取图像数据,其中所述图像数据为所述图像传感器采集得到的数据;
    所述获取单元,具体用于通过图像信号处理器获取所述待处理图像数据,其中,所述待处理图像数据由所述图像信号处理器根据所述图像数据生成;
    和/或
    所述获取单元,具体用于当到达音频采集预设运行时间时,所述获取单元通过所述音频采集器获取所述音频数据;
    所述获取单元,具体用于通过音频信号处理器获取所述待处理音频数据,其中,所述待处理音频数据由所述音频信号处理器根据所述音频数据生成;
    和/或
    所述获取单元,具体用于当到达第一预设运行时间时,所述获取单元通过所述第一子传感器获取第一子传感器数据,其中所述第一子传感器数据为所述第一子传感器采集得到的数据;
    所述获取单元,具体用于通过第一子传感器处理器获取所述第一待处理子数据,其中,所述第一待处理子数据由所述第一子传感器处理器根据所述第一子传感器数据生成。
  21. 根据权利要求17至20中任一项所述的业务处理装置,其特征在于,包括:
    所述确定单元,具体用于若所述确定单元确定所述目标情景为扫描二维码情景,则所述确定单元根据所述扫描二维码情景确定所述业务处理方式为启动所述终端设备主图像传感器和/或启动所述终端设备中支持扫描二维码功能的应用程序。
  22. 根据权利要求17至20中任一项所述的业务处理装置,其特征在于,包括:
    所述确定单元,具体用于若所述确定单元确定所述目标情景为会议情景,则所述确定单元根据所述会议情景确定所述业务处理方式为启动所述终端设备的静音模式和/或启动所述终端设备中应用程序的静音功能和/或在所述终端设备的屏幕待机常显区显示静音模式图标,其中所述静音模式图标用于启动所述的静音模式。
  23. 根据权利要求17至20中任一项所述的业务处理装置,其特征在于,包括:
    所述确定单元,具体用于若所述确定单元确定所述目标情景为运动情景,则所述确定单元根据所述运动情景确定所述业务处理方式为启动所述终端设备的运动模式和/或启动所述终端设备中应用程序的运动模式功能和/或在所述终端设备的屏幕待机常显区显示音乐播放图标,其中,所述终端设备的运动模式包括计步功能,所述音乐播放图标用于开始播放或暂停播放音乐。
  24. 根据权利要求17至20中任一项所述的业务处理装置,其特征在于,包括:
    所述确定单元,具体用于若所述确定单元确定所述目标情景为驾驶情景,则所述确定单元根据所述驾驶情景确定所述业务处理方式为启动所述终端设备的驾驶模式和/或启动所述终端设备中应用程序的驾驶模式功能和/或在所述终端设备的屏幕待机常显区显示驾驶模式图标,其中,所述终端设备的驾驶模式包括导航功能以及语音助手,所述驾驶模式图标用于启动所述驾驶模式。
  25. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行如权利要求1至8任意一项所述的方法。
  26. 一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行如权利要求1至8任意一项所述的方法。
  27. 一种业务处理的方法,其特征在于,所述方法应用于终端设备,所述终端设备上配置有常开的图像传感器,所述方法包括:
    获取数据,其中,所述数据包括所述图像传感器采集到的图像数据;
    通过情景识别模型确定所述数据所对应的目标情景,其中,所述情景识别模型为传感数据集合以及情景类型集合训练得到的;
    根据所述目标情景确定业务处理方式。
  28. 一种终端设备,其特征在于,所述终端设备上配置有常开的图像传感器,所述终端设备用于实现如权利要求1-8、以及27中任意一项所述的方法。
PCT/CN2019/086127 2018-11-21 2019-05-09 一种业务处理的方法以及相关装置 WO2020103404A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020217002422A KR20210022740A (ko) 2018-11-21 2019-05-09 서비스 처리 방법 및 관련 장치
AU2019385776A AU2019385776B2 (en) 2018-11-21 2019-05-09 Service processing method and related apparatus
EP19874765.1A EP3690678A4 (en) 2018-11-21 2019-05-09 SERVICE PROCESSING METHODS AND RELATED DEVICE
CA3105663A CA3105663C (en) 2018-11-21 2019-05-09 Service processing method and related apparatus
JP2021506473A JP7186857B2 (ja) 2018-11-21 2019-05-09 サービス処理方法および関連装置
US16/992,427 US20200372250A1 (en) 2018-11-21 2020-08-13 Service Processing Method and Related Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811392818.7A CN111209904A (zh) 2018-11-21 2018-11-21 一种业务处理的方法以及相关装置
CN201811392818.7 2018-11-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/992,427 Continuation US20200372250A1 (en) 2018-11-21 2020-08-13 Service Processing Method and Related Apparatus

Publications (1)

Publication Number Publication Date
WO2020103404A1 true WO2020103404A1 (zh) 2020-05-28

Family

ID=70773748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/086127 WO2020103404A1 (zh) 2018-11-21 2019-05-09 一种业务处理的方法以及相关装置

Country Status (8)

Country Link
US (1) US20200372250A1 (zh)
EP (1) EP3690678A4 (zh)
JP (1) JP7186857B2 (zh)
KR (1) KR20210022740A (zh)
CN (1) CN111209904A (zh)
AU (1) AU2019385776B2 (zh)
CA (1) CA3105663C (zh)
WO (1) WO2020103404A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210056220A1 (en) * 2019-08-22 2021-02-25 Mediatek Inc. Method for improving confidentiality protection of neural network model

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021122106A (ja) * 2020-01-31 2021-08-26 キヤノン株式会社 撮像装置、学習装置、撮像装置の制御方法、学習方法、学習済みモデルおよびプログラム
CN112507356B (zh) * 2020-12-04 2023-01-03 上海易校信息科技有限公司 一种基于Angular的集中式前端ACL权限控制方法
CN112862479A (zh) * 2021-01-29 2021-05-28 中国银联股份有限公司 一种基于终端姿态的业务处理方法及装置
CN113051052B (zh) * 2021-03-18 2023-10-13 北京大学 物联网系统按需设备调度规划方法与系统
CN113194211B (zh) * 2021-03-25 2022-11-15 深圳市优博讯科技股份有限公司 一种扫描头的控制方法及系统
CN117453105A (zh) * 2021-09-27 2024-01-26 荣耀终端有限公司 退出二维码的方法和装置
CN113935349A (zh) * 2021-10-18 2022-01-14 交互未来(北京)科技有限公司 一种扫描二维码的方法、装置、电子设备及存储介质
CN113900577B (zh) * 2021-11-10 2024-05-07 杭州逗酷软件科技有限公司 一种应用程序控制方法、装置、电子设备及存储介质
KR102599078B1 (ko) 2023-03-21 2023-11-06 고아라 큐티클 케어 세트 및 이를 이용한 큐티클 케어 방법

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110153617A1 (en) * 2009-12-18 2011-06-23 Toyota Motor Engineering & Manufacturing North America, Inc. Method and system for describing and organizing image data
CN107402964A (zh) * 2017-06-22 2017-11-28 深圳市金立通信设备有限公司 一种信息推荐方法、服务器及终端
CN107786732A (zh) * 2017-09-28 2018-03-09 努比亚技术有限公司 终端应用推送方法、移动终端及计算机可读存储介质
CN108322609A (zh) * 2018-01-31 2018-07-24 努比亚技术有限公司 一种通知信息调控方法、设备及计算机可读存储介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8756173B2 (en) * 2011-01-19 2014-06-17 Qualcomm Incorporated Machine learning of known or unknown motion states with sensor fusion
US8892162B2 (en) * 2011-04-25 2014-11-18 Apple Inc. Vibration sensing system and method for categorizing portable device context and modifying device operation
PL398136A1 (pl) * 2012-02-17 2013-08-19 Binartech Spólka Jawna Aksamit Sposób wykrywania kontekstu urzadzenia przenosnego i urzadzenie przenosne z modulem wykrywania kontekstu
WO2014020604A1 (en) * 2012-07-31 2014-02-06 Inuitive Ltd. Multiple sensors processing system for natural user interface applications
CN104268547A (zh) * 2014-08-28 2015-01-07 小米科技有限责任公司 一种基于图片内容播放音乐的方法及装置
CN115690558A (zh) * 2014-09-16 2023-02-03 华为技术有限公司 数据处理的方法和设备
US9633019B2 (en) * 2015-01-05 2017-04-25 International Business Machines Corporation Augmenting an information request
CN105138963A (zh) * 2015-07-31 2015-12-09 小米科技有限责任公司 图片场景判定方法、装置以及服务器
JP6339542B2 (ja) * 2015-09-16 2018-06-06 東芝テック株式会社 情報処理装置及びプログラム
JP6274264B2 (ja) * 2016-06-29 2018-02-07 カシオ計算機株式会社 携帯端末装置及びプログラム
WO2018084577A1 (en) * 2016-11-03 2018-05-11 Samsung Electronics Co., Ltd. Data recognition model construction apparatus and method for constructing data recognition model thereof, and data recognition apparatus and method for recognizing data thereof
US10592199B2 (en) * 2017-01-24 2020-03-17 International Business Machines Corporation Perspective-based dynamic audio volume adjustment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3690678A4


Also Published As

Publication number Publication date
CA3105663C (en) 2023-12-12
AU2019385776A1 (en) 2021-01-28
AU2019385776B2 (en) 2023-07-06
JP2021535644A (ja) 2021-12-16
EP3690678A1 (en) 2020-08-05
KR20210022740A (ko) 2021-03-03
CA3105663A1 (en) 2020-05-28
CN111209904A (zh) 2020-05-29
JP7186857B2 (ja) 2022-12-09
EP3690678A4 (en) 2021-03-10
US20200372250A1 (en) 2020-11-26

Similar Documents

Publication Publication Date Title
WO2020103404A1 (zh) 一种业务处理的方法以及相关装置
CN110045908B (zh) 一种控制方法和电子设备
CN109409161B (zh) 图形码识别方法、装置、终端及存储介质
CN115473957B (zh) 一种图像处理方法和电子设备
CN108399349B (zh) 图像识别方法及装置
CN111738122B (zh) 图像处理的方法及相关装置
US20230245398A1 (en) Image effect implementing method and apparatus, electronic device and storage medium
CN110059686B (zh) 字符识别方法、装置、设备及可读存储介质
US20220262035A1 (en) Method, apparatus, and system for determining pose
CN115079886B (zh) 二维码识别方法、电子设备以及存储介质
WO2022073417A1 (zh) 融合场景感知机器翻译方法、存储介质及电子设备
WO2022179604A1 (zh) 一种分割图置信度确定方法及装置
EP4175285A1 (en) Method for determining recommended scene, and electronic device
WO2022156473A1 (zh) 一种播放视频的方法及电子设备
CN110045958B (zh) 纹理数据生成方法、装置、存储介质及设备
CN113220176A (zh) 基于微件的显示方法、装置、电子设备及可读存储介质
WO2022143314A1 (zh) 一种对象注册方法及装置
WO2022161011A1 (zh) 生成图像的方法和电子设备
US9525825B1 (en) Delayed image data processing
CN115150542B (zh) 一种视频防抖方法及相关设备
WO2022089216A1 (zh) 一种界面显示的方法和电子设备
CN114071024A (zh) 图像拍摄方法、神经网络训练方法、装置、设备和介质
WO2023216957A1 (zh) 一种目标定位方法、系统和电子设备
CN116761082B (zh) 图像处理方法及装置
WO2024088130A1 (zh) 显示方法和电子设备

Legal Events

Date Code Title Description
ENP Entry into the national phase (Ref document number: 2019874765; Country of ref document: EP; Effective date: 20200429)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19874765; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 3105663; Country of ref document: CA)
ENP Entry into the national phase (Ref document number: 20217002422; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2019385776; Country of ref document: AU; Date of ref document: 20190509; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2021506473; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)