WO2020103404A1 - A business processing method and related apparatus - Google Patents
A business processing method and related apparatus
- Publication number
- WO2020103404A1 · PCT/CN2019/086127 · CN2019086127W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- terminal device
- sensor
- scenario
- processed
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
Definitions
- This application relates to the field of artificial intelligence, and in particular to a business processing method and related apparatus.
- Terminal devices, represented by smartphones, account for an increasing proportion of people's daily lives.
- In daily life, people can use a smartphone to scan a picture carrying a two-dimensional code (QR code) to invoke the function of a related application program or to obtain information.
- However, scanning a picture carrying a QR code in this way requires complicated manual operation and offers little automation, which reduces the user's convenience.
- Embodiments of the present application provide a business processing method and related apparatus, applied to a terminal device.
- The terminal device can obtain data to be processed through a sensor in the terminal device; a scenario recognition model in the terminal device determines the current scenario from the data to be processed, and the terminal device then determines the corresponding business processing method for that scenario. Because the business processing method is preset in the terminal device, this simplifies the user's operation steps, makes the operation more intelligent, and improves the user's convenience.
- In a first aspect, an embodiment of the present application provides a business processing method applied to a terminal device, including: acquiring data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least image data to be processed generated from the image data collected by the infrared image sensor; determining, through a scenario recognition model, the target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on a sensor data set and a scenario type set; and determining the business processing method according to the target scenario.
- the terminal device collects data through a sensor deployed inside the terminal device or connected to the terminal device.
- The sensor includes at least an infrared image sensor, and the terminal device generates data to be processed from the collected data; the data to be processed includes at least image data to be processed generated from the image data collected by the infrared image sensor.
- After the terminal device obtains the data to be processed, it can determine the target scenario corresponding to that data through the scenario recognition model.
- The scenario recognition model is obtained by offline training on the data set collected by the sensor and the set of scenario types corresponding to different data; the offline training uses a deep learning framework for model design and training. After the terminal device determines the current target scenario, it can determine the corresponding business processing method according to that scenario.
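The offline training step described above can be sketched as fitting a classifier to labeled (sensor data, scenario type) pairs. The patent does not disclose a concrete model; the following minimal one-feature logistic regression is only an illustrative stand-in for the deep-learning framework it mentions, and all data and names here are synthetic assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pairs, lr=0.5, epochs=200):
    """Fit a 1-feature logistic regression by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in pairs:             # y = 1 for "conference", else 0
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x      # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)          # gradient of the log-loss w.r.t. b
    return w, b

# Synthetic training set: low ambient sound level -> conference scenario.
data = [(0.1, 1), (0.2, 1), (0.8, 0), (0.9, 0)]
w, b = train(data)
pred = sigmoid(w * 0.15 + b)   # probability of "conference" for a quiet reading
```

A real implementation would train a deep network on the full multi-sensor data set offline and deploy the resulting weights to the terminal device.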
- In this way, the current target scenario of the terminal device can be determined, and the corresponding business processing method can be determined according to the target scenario.
- The above infrared image sensor is always on. As technology develops, the image sensor in this application need not be an infrared sensor, as long as it can collect images; however, among currently known sensors, the infrared sensor has the lowest power consumption.
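The overall flow described so far (sensor data in, scenario out, preset action dispatched) can be sketched as below. All names (`recognize_scenario`, `ACTIONS`, the trivial rule standing in for the trained model) are illustrative assumptions, not part of the patent.

```python
# Preset mapping from recognized scenario to business processing method,
# mirroring the scenarios discussed in this publication.
ACTIONS = {
    "qr_code":    "start main image sensor / QR-scanning app",
    "conference": "activate mute mode",
    "sports":     "activate sports mode (step counter, music icon)",
    "driving":    "activate driving mode (navigation, voice assistant)",
}

def recognize_scenario(features):
    # Placeholder for the trained scenario recognition model; a trivial
    # rule stands in for the deep-learning classifier here.
    return "qr_code" if features.get("square_pattern") else "conference"

def handle(features):
    """Recognize the scenario, then look up the preset business action."""
    scenario = recognize_scenario(features)
    return scenario, ACTIONS[scenario]

scenario, action = handle({"square_pattern": True})
```

The point of the design is that the action lookup is preset, so no user interaction is needed between recognition and execution.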
- Determining the target scenario corresponding to the data to be processed through the scenario recognition model includes: determining the target scenario corresponding to the data to be processed through an AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm and runs on an AI processor.
- Specifically, the terminal device uses the AI algorithm in the scenario recognition model to determine the target scenario corresponding to the data to be processed.
- The AI algorithm includes a deep learning algorithm and runs on the AI processor in the terminal device, which has powerful parallel computing capability and executes AI algorithms efficiently; the scenario recognition model therefore uses the AI algorithm to determine the specific target scenario.
- Running the AI algorithm on the AI processor in the terminal device improves the efficiency of scenario recognition and further improves the user's convenience.
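As a sketch of the inference step that would run on the AI processor, the snippet below maps a small feature vector to scenario probabilities with a single dense layer and softmax. The weights, scenario list, and feature dimensionality are illustrative constants, not values from the patent; a real deployment would use a trained deep network.

```python
import math

SCENARIOS = ["qr_code", "conference", "sports", "driving"]

# Hypothetical weight matrix: row i maps input feature i to each scenario.
W = [[0.9, 0.1, 0.0, 0.0],
     [0.0, 0.8, 0.1, 0.1],
     [0.1, 0.0, 0.9, 0.0]]
B = [0.0, 0.0, 0.0, 0.0]   # per-scenario bias

def softmax(z):
    m = max(z)                         # subtract max for numeric stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(features):
    """Dense layer + softmax over 3 normalized sensor features."""
    logits = [sum(f * W[i][j] for i, f in enumerate(features)) + B[j]
              for j in range(len(SCENARIOS))]
    probs = softmax(logits)
    return SCENARIOS[probs.index(max(probs))]
```

On a dedicated AI processor, the matrix multiply above is exactly the kind of operation that parallelizes well, which is the efficiency argument the text makes.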
- The sensor further includes at least one of an audio collector and a first sub-sensor, and the data to be processed includes at least one of audio data to be processed and first sub-data to be processed, where the audio data to be processed is generated from the audio data collected by the audio collector and the first sub-data to be processed is generated from the first sub-sensor data collected by the first sub-sensor.
- In addition to the infrared image sensor, the sensors deployed in the terminal device include at least one of the audio collector and the first sub-sensor.
- the first sub-sensor may be an acceleration sensor, a gyroscope, an ambient light sensor, or a proximity sensor.
- The audio collector collects audio data, which the terminal device processes to generate the audio data to be processed.
- The first sub-sensor collects first sub-sensor data, which the terminal device processes to generate the first sub-data to be processed.
- The terminal device uses multiple sensors to collect data in multiple dimensions, which improves the accuracy of scenario recognition.
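The multi-dimensional collection described above amounts to fusing features from the different sensors into one input for the recognition model. The sketch below normalizes each sensor's readings to a common scale and concatenates them; the value ranges (8-bit pixels, unit audio energy, ±20 m/s² acceleration) are assumptions for illustration.

```python
def normalize(values, lo, hi):
    """Scale raw readings into [0, 1] given an assumed sensor range."""
    return [(v - lo) / (hi - lo) for v in values]

def fuse(image_feats, audio_feats, accel_feats):
    """Concatenate normalized features from three sensor modalities."""
    return (normalize(image_feats, 0, 255)      # infrared pixel statistics
            + normalize(audio_feats, 0.0, 1.0)  # audio energy bands
            + normalize(accel_feats, -20, 20))  # acceleration axes, m/s^2

vec = fuse([128, 64], [0.2, 0.7], [0.0, 9.8, 0.0])
```

The fused vector is what a classifier like the scenario recognition model would consume, letting evidence from one modality compensate for ambiguity in another.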
- Acquiring the data to be processed includes: when the preset operating time for image acquisition is reached, acquiring image data through the infrared image sensor, and acquiring the image data to be processed through an image signal processor, where the image data to be processed is generated by the image signal processor from the image data; and/or, when the preset operating time for audio collection is reached, acquiring audio data through the audio collector, and acquiring the audio data to be processed through an audio signal processor, where the audio data to be processed is generated by the audio signal processor from the audio data; and/or, when the first preset operating time is reached, acquiring first sub-sensor data through the first sub-sensor, and acquiring the first sub-data to be processed through a first sub-sensor processor, where the first sub-data to be processed is generated by the first sub-sensor processor from the first sub-sensor data.
- After its respective preset operating time is reached, each of the infrared image sensor, the audio collector, and the first sub-sensor can collect its corresponding data to obtain raw sensor data.
- The terminal device uses the processor corresponding to each sensor to process the raw sensor data and generate the sensor data to be processed.
- In other words, each sensor is started periodically to collect data, and the collected raw data is processed by the processor corresponding to that sensor.
- This reduces the cache space occupied by the scenario recognition model, reduces its power consumption, and extends the standby time of the terminal device.
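The "preset operating time" idea can be sketched as a per-sensor polling schedule: each sensor is only read once its own interval has elapsed, so the recognition model sees periodic snapshots instead of a continuous stream. The interval values and sensor names below are assumptions, not from the patent.

```python
# Assumed preset operating intervals, in seconds, per sensor.
INTERVALS = {"infrared_image": 5.0, "audio": 2.0, "accelerometer": 0.5}

def due_sensors(last_run, now):
    """Return the sensors whose preset interval has elapsed since last run."""
    return [s for s, period in INTERVALS.items()
            if now - last_run.get(s, 0.0) >= period]

last = {"infrared_image": 0.0, "audio": 0.0, "accelerometer": 0.0}
ready = due_sensors(last, now=3.0)   # audio and accelerometer are due
```

Polling on fixed intervals is one plausible way to get the power saving the text claims: the expensive sensors (infrared image) run least often while cheap ones (accelerometer) run frequently.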
- Determining the business processing method according to the target scenario includes: if the target scenario is a QR code scanning scenario, determining, according to that scenario, that the business processing method is to start the main image sensor of the terminal device and/or start an application program in the terminal device that supports the QR code scanning function.
- Based on the data collected by one or more sensors in the terminal device, the terminal device determines that the target scenario corresponding to the collected data is the QR code scanning scenario, and determines the corresponding business processing method: starting the main image sensor in the terminal device, which can then be used to scan the QR code, and/or starting an application program that supports the QR code scanning function, for example starting the application WeChat and opening its QR code scanning feature. The main image sensor and the supporting application can be started at the same time, or either can be started according to a preset command or a received user instruction; this is not limited here.
- The terminal device uses the data collected by the multi-dimensional sensors and, through the scenario recognition model, determines that the target scenario is the QR code scanning scenario; it can then automatically execute the related business processing method, which makes the terminal device more intelligent and the user's operation more convenient.
- Determining the business processing method according to the target scenario includes: if the target scenario is a conference scenario, determining, according to the conference scenario, that the business processing method is to activate the mute mode of the terminal device and/or activate the mute function of an application in the terminal device and/or display a mute mode icon in the always-on display area of the terminal device's screen, where the mute mode icon is used to activate the mute mode.
- Based on the data collected by one or more sensors in the terminal device, the terminal device determines that the target scenario corresponding to the collected data is the conference scenario.
- The business processing method corresponding to the conference scenario is determined, which includes activating the mute mode of the terminal device.
- When the terminal device is in mute mode, all applications running in the terminal device are muted.
- The terminal device can also activate the mute function of an application running in the terminal device, for example the mute function of the application WeChat, in which case WeChat's notification sounds are muted; a mute mode icon can also be displayed in the always-on display area of the terminal device's screen.
- Through the mute mode icon, the terminal device can receive the user's mute operation instruction and activate the mute mode in response to that instruction.
- The terminal device uses the data collected by the multi-dimensional sensors and, through the scenario recognition model, determines that the target scenario is the conference scenario; it can then automatically execute the related business processing method, which makes the terminal device more intelligent and the user's operation more convenient.
- Determining the business processing method according to the target scenario includes: if the target scenario is a sports scenario, determining, according to the sports scenario, that the business processing method is to activate the sports mode of the terminal device and/or activate the sports mode function of an application program in the terminal device and/or display a music playback icon in the always-on display area of the terminal device's screen, where the sports mode of the terminal device includes a step counting function and the music playback icon is used to start or pause music playback.
- Based on the data collected by one or more sensors in the terminal device, the terminal device determines that the target scenario corresponding to the collected data is the sports scenario.
- The business processing method corresponding to the sports scenario is determined, which includes activating the sports mode of the terminal device.
- The terminal device starts the pedometer application and the physiological data monitoring application, and uses relevant sensors in the terminal device to record the user's steps and related physiological data.
- The terminal device can also activate the sports mode function of an application program in the terminal device, for example the sports function of the application NetEase Cloud Music, in which case NetEase Cloud Music's playback mode is the sports mode; a music playback icon can also be displayed in the always-on display area of the terminal device's screen.
- Through the music playback icon, the terminal device can receive the user's music playback instruction and start or pause music playback in response to that instruction.
- The terminal device uses the data collected by the multi-dimensional sensors and, through the scenario recognition model, determines that the target scenario is the sports scenario; it can then automatically execute the related business processing method, which makes the terminal device more intelligent and the user's operation more convenient.
- Determining the business processing method according to the target scenario includes: if the target scenario is a driving scenario, determining, according to the driving scenario, that the business processing method is to activate the driving mode of the terminal device and/or activate the driving mode function of an application in the terminal device and/or display a driving mode icon in the always-on display area of the terminal device's screen, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving mode icon is used to activate the driving mode.
- The business processing method corresponding to the driving scenario is determined, which includes activating the driving mode of the terminal device.
- The terminal device starts a voice assistant, so that it can perform related operations according to voice instructions input by the user; the terminal device can also start a navigation function.
- The terminal device can also activate the driving mode function of an application in the terminal device, for example the driving function of the application Gaode Map, in which case Gaode Map's navigation mode is the driving mode; a driving mode icon can also be displayed in the always-on display area of the terminal device's screen.
- Through the driving mode icon, the terminal device can receive the user's driving mode instruction and activate the driving mode in response to that instruction.
- The terminal device uses the data collected by the multi-dimensional sensors and, after determining through the scenario recognition model that the target scenario is the driving scenario, can automatically execute the related business processing method, which makes the terminal device more intelligent and the user's operation more convenient.
- An embodiment of the present application provides a terminal device, including a sensor and a processor, where the sensor includes at least an infrared image sensor. The processor is configured to obtain data to be processed, where the data to be processed is generated from the data collected by the sensor and includes at least image data to be processed generated from the image data collected by the infrared image sensor; the processor is further configured to determine, through a scenario recognition model, the target scenario corresponding to the data to be processed, where the scenario recognition model is obtained by training on the sensor data set and the scenario type set; and the processor is further configured to determine a business processing method according to the target scenario. The processor is also configured to execute the business processing method described in the first aspect.
- In a third aspect, an embodiment of the present application provides a business processing apparatus.
- The business processing apparatus is applied to a terminal device and includes: an acquiring unit, configured to acquire data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least image data to be processed generated from the image data collected by the infrared image sensor; and a determining unit, configured to determine, through a scenario recognition model, the target scenario corresponding to the data to be processed, where the scenario recognition model is trained on the sensor data set and the scenario type set. The determining unit is further configured to determine the business processing method according to the target scenario.
- In a possible implementation of the third aspect, the determining unit is specifically configured to determine the target scenario corresponding to the data to be processed through an AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm and runs on an AI processor.
- In a possible implementation of the third aspect, the sensor further includes at least one of an audio collector and a first sub-sensor, and the data to be processed includes at least one of audio data to be processed and first sub-data to be processed, where the audio data to be processed is generated from the audio data collected by the audio collector and the first sub-data to be processed is generated from the first sub-sensor data collected by the first sub-sensor.
- In a possible implementation of the third aspect, the acquiring unit is specifically configured to: acquire image data through the infrared image sensor when the preset operating time for image acquisition is reached, and acquire the image data to be processed through an image signal processor, where the image data to be processed is generated by the image signal processor from the image data; and/or acquire audio data through the audio collector when the preset operating time for audio collection is reached, and acquire the audio data to be processed through an audio signal processor, where the audio data to be processed is generated by the audio signal processor from the audio data; and/or acquire first sub-sensor data through the first sub-sensor when the first preset operating time is reached, and acquire the first sub-data to be processed through a first sub-sensor processor, where the first sub-data to be processed is generated by the first sub-sensor processor from the first sub-sensor data.
- In a possible implementation of the third aspect, the determining unit is specifically configured to: if the determining unit determines that the target scenario is a QR code scanning scenario, determine, according to that scenario, that the business processing method is to start the main image sensor of the terminal device and/or start an application program in the terminal device that supports the QR code scanning function.
- In a possible implementation of the third aspect, the determining unit is specifically configured to: if the determining unit determines that the target scenario is a conference scenario, determine, according to the conference scenario, that the business processing method is to activate the mute mode of the terminal device and/or activate the mute function of an application in the terminal device and/or display a mute mode icon in the always-on display area of the terminal device's screen.
- In a possible implementation of the third aspect, the determining unit is specifically configured to: if the determining unit determines that the target scenario is a sports scenario, determine, according to the sports scenario, that the business processing method is to activate the sports mode of the terminal device and/or activate the sports mode function of an application in the terminal device and/or display a music playback icon in the always-on display area of the terminal device's screen, where the music playback icon is used to start or pause music playback.
- In a possible implementation of the third aspect, the determining unit is specifically configured to: if the determining unit determines that the target scenario is a driving scenario, determine, according to the driving scenario, that the business processing method is to activate the driving mode of the terminal device and/or activate the driving mode function of an application in the terminal device and/or display a driving mode icon in the always-on display area of the terminal device's screen.
- An embodiment of the present application provides a computer program product containing instructions which, when the computer program product runs on a computer, cause the computer to execute the business processing method described in the first aspect.
- an embodiment of the present application provides a computer-readable storage medium that stores instructions which, when run on a computer, cause the computer to perform the business processing method described in the first aspect.
- the present application provides a chip system including a processor, configured to support a network device in implementing the functions involved in the foregoing aspects, for example, sending or processing the data and/or information involved in the foregoing methods.
- the chip system further includes a memory, which is used to store necessary program instructions and data of the network device.
- the chip system may be composed of chips, or may include chips and other discrete devices.
- the present application provides a method for business processing.
- the method is applied to a terminal device, and the terminal device is equipped with a normally-open image sensor.
- the method includes: acquiring data, where the data includes the image data collected by the image sensor; determining, through a scenario recognition model, the target scenario corresponding to the data, where the scenario recognition model is obtained by training a sensor data set and a scenario type set; and determining the business processing method according to the target scenario.
- the present application provides a terminal device configured with a normally-open image sensor, and the terminal device is used to implement the method described in any of the foregoing implementation manners.
- the terminal device can obtain the data to be processed through the sensor in the terminal device; the scenario recognition model in the terminal device determines the current scenario according to the data to be processed, and the terminal device determines the corresponding business processing method according to the current scenario. Since the business processing method is a preset method of processing business in the terminal device, this simplifies the user's operation steps, increases the intelligence of the operation, and improves convenience for the user.
- the terminal device is specifically a smart phone. When the smart phone's screen is off and a picture carrying a two-dimensional code needs to be scanned, the smart phone can automatically start the related application or obtain the information without additional operations, improving convenience for the user.
- FIG. 1a is a schematic diagram of a system architecture in an embodiment of the present application.
- FIG. 1b is a schematic diagram of another system architecture in an embodiment of the present application.
- FIG. 2 is a schematic diagram of usage scenarios involved in the service processing method provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of an embodiment of a service processing method provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of an embodiment of intelligent application startup provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of an embodiment of an intelligent recommendation service provided by an embodiment of this application.
- FIG. 6 is a schematic flowchart of an application scenario of a method for business processing in an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of a computer system provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of an AI processor provided by an embodiment of this application.
- FIG. 9 is a schematic diagram of an embodiment of a service processing device in an embodiment of the present application.
- the present application provides a business processing method and related apparatus.
- the terminal device can obtain data to be processed through a sensor in the terminal device; the scenario recognition model in the terminal device determines the current scenario based on the data to be processed, and the terminal device determines the corresponding business processing method according to the current scenario. Since the business processing method is a method for processing business preset in the terminal device, it simplifies the user's operation steps, increases the intelligence of the operation, and improves convenience for the user.
- a processor is composed of one or more cores, also called computing units.
- the cores in the embodiments of the present application mainly relate to heterogeneous cores, and the types of these cores include but are not limited to the following:
- the central processing unit (CPU) is a very-large-scale integrated circuit and is the computing core and control unit of a computer. Its main function is to interpret computer instructions and process data in computer software.
- GPUs Graphics processors
- the GPU, also known as the display core, visual processor, or display chip, is a microprocessor dedicated to image calculation work.
- DSP Digital signal processor
- DSP refers to a chip that can implement digital signal processing technology.
- the DSP chip uses a Harvard architecture with separate program and data storage. It has a dedicated hardware multiplier, makes extensive use of pipelined operation, and provides special DSP instructions that can quickly implement various digital signal processing algorithms.
- ISP Image signal processor
- its main function is to post-process the data output by the image sensor, including linear correction, noise removal, dead pixel correction, interpolation, white balance, and automatic exposure.
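The white-balance step listed above can be illustrated with a simple gray-world algorithm. This is only a sketch: the patent does not specify which white-balance algorithm the ISP uses, and the function name and pixel format here are assumptions.

```python
def gray_world_white_balance(pixels):
    """Scale the R and B channels so their means match the green mean
    (the gray-world assumption). `pixels` is a list of (r, g, b) tuples."""
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    gain_r = mean_g / mean_r if mean_r else 1.0
    gain_b = mean_g / mean_b if mean_b else 1.0
    return [(min(255, round(r * gain_r)), g, min(255, round(b * gain_b)))
            for r, g, b in pixels]

# A reddish frame: after balancing, the mean red equals the mean green.
balanced = gray_world_white_balance([(200, 100, 50), (100, 100, 100)])
```

A real ISP applies this per frame in hardware; the same idea also underlies the "linear correction" and exposure statistics stages, which read channel averages from the raw sensor output.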
- ASP Audio signal processor
- ASP refers to a chip that can implement audio signal processing computation; the ASP is a kind of DSP chip.
- its main function is to post-process the data output by the audio collector, including sound source localization, sound source enhancement, echo cancellation, and noise suppression.
- AI processor (artificial intelligence processor)
- AI processors, also known as artificial intelligence processors or AI accelerators, are processing chips that run artificial intelligence algorithms. They are usually implemented with application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs), and can also be implemented with GPUs, which is not limited here. An AI processor typically uses a systolic array structure, in which data "flows" between the processing units of the array at a predetermined pipeline rhythm. As the data flows, all processing units process the data passing through them in parallel, so a high parallel processing speed can be achieved.
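The pipelined "flow" through a systolic array described above can be sketched as a cycle-by-cycle simulation of an output-stationary matrix multiply. The skewed operand injection and register layout are illustrative assumptions, not microarchitectural details from this application.

```python
def systolic_matmul(A, B):
    """Toy simulation of an output-stationary systolic array computing
    C = A x B, where A is n x k and B is k x m. Row i of A enters from
    the left delayed by i cycles; column j of B enters from the top
    delayed by j cycles, so A[i][t] and B[t][j] meet at PE(i, j)."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    a_reg = [[None] * m for _ in range(n)]  # operand arriving from the left
    b_reg = [[None] * m for _ in range(n)]  # operand arriving from the top
    for cycle in range(n + m + k):          # enough cycles to drain the array
        # Data flows one processing element right / down per cycle.
        for i in range(n):
            for j in range(m - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        for j in range(m):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
        # New operands enter at the edges, skewed by row / column index.
        for i in range(n):
            t = cycle - i
            a_reg[i][0] = A[i][t] if 0 <= t < k else None
        for j in range(m):
            t = cycle - j
            b_reg[0][j] = B[t][j] if 0 <= t < k else None
        # All processing elements multiply-accumulate simultaneously.
        for i in range(n):
            for j in range(m):
                if a_reg[i][j] is not None and b_reg[i][j] is not None:
                    C[i][j] += a_reg[i][j] * b_reg[i][j]
    return C
```

Every processing element works on the data flowing through it in the same cycle, which is what gives the structure its high parallel throughput.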
- the AI processor may specifically be a neural network processing unit (NPU), a tensor processing unit (TPU), an intelligence processing unit (IPU), or a GPU.
- Neural network processor (neural-network processing unit, NPU)
- NPU simulates human neurons and synapses at the circuit layer, and uses deep learning instruction sets to directly process large-scale neurons and synapses.
- one instruction completes the processing of a group of neurons.
- the NPU realizes the integration of storage and calculation through synaptic weights, thereby greatly improving the operating efficiency.
- sensors are provided on the terminal device, and the terminal device obtains external information through these sensors.
- the sensors involved in the embodiments of the present application include but are not limited to the following types:
- IR-RGB image sensor: an infrared-RGB image sensor uses a CCD (charge-coupled device) unit or a standard CMOS (complementary metal-oxide-semiconductor) unit and filters the incoming light so that only light in the visible color band and a set infrared band passes. In the image signal processor, the signal is separated into an IR (infrared radiation) image data stream and an RGB (red green blue, the three primary colors) image data stream. The IR image data stream is the image data stream obtained in a low-light environment, and the two separated image data streams are used by other applications for processing.
- Acceleration sensor
- the acceleration sensor is used to measure the acceleration change of an object, usually along the three directions X, Y, and Z, where the X-direction value represents the horizontal movement of the terminal device, the Y-direction value represents the vertical movement of the terminal device, and the Z-direction value represents the spatial vertical movement of the terminal device.
- it is used to measure the speed and direction of movement of the terminal device. For example, when the user walks while holding the terminal device, the device swings up and down, so the sensor can detect the acceleration changing back and forth in a certain direction; by counting these back-and-forth changes, the number of steps can be calculated.
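The back-and-forth counting idea above can be sketched as peak detection over the acceleration magnitude. The threshold and minimum gap between steps are assumed values, not parameters from this application.

```python
def count_steps(samples, threshold=10.5, min_gap=3):
    """Count peaks in |acceleration| above `threshold` (m/s^2), keeping
    at least `min_gap` samples between consecutive steps to suppress
    jitter. `samples` is a list of (x, y, z) accelerometer readings."""
    steps, last_step = 0, -min_gap
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    for i in range(1, len(mags) - 1):
        is_peak = mags[i] > mags[i - 1] and mags[i] >= mags[i + 1]
        if is_peak and mags[i] > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps
```

At a 100 ms sampling interval (as in the timer example later in this document), `min_gap=3` corresponds to ignoring "steps" closer than 300 ms apart.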
- Gyroscope: a gyroscope is a sensor that measures the angular velocity of an object around a certain rotation axis.
- the gyroscope used in the terminal device is a micro-electro-mechanical systems (MEMS) gyroscope; the common MEMS gyroscope chip is a three-axis gyroscope chip, which can track displacement changes in six directions.
- the three-axis gyroscope chip can obtain the change value of the angular acceleration of the terminal device in the three directions of x, y, and z, and is used to detect the rotation direction of the terminal device.
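Turning the gyroscope's angular-rate readings into a rotation direction amounts to integrating them over time. A minimal sketch, with a made-up sampling period and readings:

```python
def integrate_gyro(omega_z, dt):
    """Integrate z-axis angular velocity samples (deg/s), taken every
    `dt` seconds, into a cumulative rotation angle in degrees."""
    angle = 0.0
    for w in omega_z:
        angle += w * dt
    return angle

# 90 deg/s held for 1 s, sampled at 100 Hz: roughly a quarter turn.
angle = integrate_gyro([90.0] * 100, 0.01)
```

Real systems additionally correct for gyroscope bias drift, often by fusing in accelerometer or magnetometer data.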
- Ambient light sensor: a sensor that measures changes in outside light intensity based on the photoelectric effect. In terminal equipment it is used to adjust the brightness of the display screen, and because the display screen is usually the most power-consuming part of the terminal device, using the ambient light sensor to help adjust screen brightness can further extend battery life.
- Proximity sensor
- the proximity light sensor consists of an infrared emission lamp and an infrared radiation light detector.
- the proximity light sensor is located near the earpiece of the terminal device. When the terminal device is close to the ear, the system uses the proximity light sensor to determine that the user is on a phone call, and then turns off the display screen to prevent misoperation from affecting the call.
- the working principle of the proximity light sensor is that invisible infrared light emitted by the infrared emission lamp is reflected by nearby objects and then detected by the infrared radiation light detector; the emitted invisible infrared light is generally in the near-infrared band.
- Geomagnetic sensor: a measuring device that exploits the fact that the magnetic flux distribution of the geomagnetic field differs in different directions, so that by sensing the distribution of the geomagnetic field it can indicate information such as the attitude and motion angle of the measured object as the object moves in the geomagnetic field. It is generally used in the compass or navigation application of the terminal device, helping the user achieve accurate positioning by calculating the specific orientation of the terminal device in three-dimensional space.
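The compass use mentioned above reduces, in the flat-device case, to computing a heading from the horizontal magnetic field components. This sketch assumes the device is held level; the axis convention and field values are illustrative.

```python
import math

def heading_degrees(mx, my):
    """Heading clockwise from magnetic north for a device held flat,
    assuming x points to the device's right and y to its top."""
    return math.degrees(math.atan2(mx, my)) % 360.0

# Field along +y (device top toward north) gives heading 0;
# field along +x gives heading 90 (east).
```

A tilted device first needs tilt compensation using the accelerometer before this formula applies.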
- the service processing method provided in the embodiments of the present application can be applied to a terminal device.
- the terminal device may be a mobile phone, a tablet personal computer, a laptop computer, a digital camera, a personal digital assistant (PDA), a navigation device, a mobile internet device (MID), a wearable device such as a smart watch or a smart bracelet, and so on.
- this embodiment of the present application does not limit the operating system carried on the terminal device.
- FIG. 1a is a schematic diagram of a system architecture in an embodiment of the present application.
- the terminal device can be logically divided into a hardware layer, an operating system, and an application layer.
- the hardware layer includes hardware resources such as a main processor, a microcontroller unit, a modem, a Wi-Fi module, a sensor, and a positioning module.
- the application layer includes one or more applications, which can be any type of application such as social applications, e-commerce applications, browsers, multimedia applications, and navigation applications, as well as programs such as the scenario recognition model and artificial intelligence algorithms.
- the operating system is a computer program that manages and controls hardware and software resources.
- in addition to hardware resources such as the main processor, sensors, and the Wi-Fi module, the hardware layer also includes an always-on (AO) area.
- the hardware in the always-on area is usually powered on around the clock and includes hardware resources such as a sensor control center (sensor hub), an AI processor, and sensors. The sensor hub contains a coprocessor and sensor processors; a sensor processor is used to process the data output by the sensors. After the data generated by the AI processor and the sensor processors is further processed by the coprocessor, the coprocessor interacts with the main processor.
- the sensors in the always-on zone include: infrared image sensors, gyroscopes, acceleration sensors, and audio collectors (mic), etc.
- the sensor processors include: a mini image signal processor (miniISP) and an audio signal processor (ASP).
- FIG. 1b is a schematic diagram of another system architecture in an embodiment of the present application.
- the operating system includes a kernel, a hardware abstraction layer (HAL), a library and a runtime, and a framework.
- the kernel is used to provide low-level system components and services, such as power management, memory management, thread management, and hardware drivers; the hardware drivers include Wi-Fi drivers, sensor drivers, positioning module drivers, etc.
- the hardware abstraction layer encapsulates the kernel driver, provides an interface to the framework, and shields low-level implementation details.
- the hardware abstraction layer runs in user space, while the kernel driver runs in kernel space.
- the library and runtime are also called runtime libraries, which provide the library files and execution environment required by the executable program at runtime.
- Libraries and runtimes include Android runtime (ART) and libraries.
- ART is a virtual machine or virtual machine instance that can convert the bytecode of an application into machine code.
- the library is a program library that provides support for executable programs at runtime, including browser engines (such as webkit), script execution engines (such as JavaScript engines), and graphics processing engines.
- the framework is used to provide various basic common components and services for applications in the application layer, such as window management, location management, and so on.
- the framework may include a phone manager, resource manager, location manager, etc.
- each component of the operating system described above can be implemented by the main processor executing a program stored in the memory.
- the terminal may include fewer or more components than those shown in FIG. 1a and FIG. 1b; the terminal devices shown in FIG. 1a and FIG. 1b include only the components more relevant to the implementations disclosed in the embodiments of the present application.
- FIG. 2 is a schematic diagram of a usage scenario involved in the service processing method provided by an embodiment of the present application.
- a processor is provided on the terminal device, and the processor includes at least two cores.
- the at least two cores may include CPU and AI processor.
- AI processors include but are not limited to neural network processors, tensor processors, and GPUs. These chips can be called cores and are used to perform calculations on the terminal device; different cores have different energy efficiency ratios.
- the terminal device can use specific algorithms to perform different application services.
- the method of the embodiment of the present application involves running a scenario recognition model.
- the terminal device can use the scenario recognition model to determine the target scenario in which the user currently using the terminal device is located, and perform different business processing according to the determined target scenario.
- when the terminal device determines the target scenario in which the user currently using the terminal device is located, it determines different target scenarios based on the data collected by different sensors and the AI algorithm in the scenario recognition model.
- the embodiments of the present application provide a business processing method.
- the following embodiments of the present application mainly describe determining, according to the data collected by different sensors and the scenario recognition model, the target scenario in which the terminal device is located and the business processing corresponding to that target scenario.
- FIG. 3 is a schematic diagram of an embodiment of a service processing method provided by an embodiment of the present application.
- the service processing method includes:
- the terminal device starts a timer connected to the sensor, and the timer is used to indicate a time interval for the sensor connected to it to collect data.
- the coprocessor in the AO area sets the timing of timers corresponding to different sensors according to the requirements of the scene recognition model.
- the timer corresponding to the acceleration sensor can be set to 100 milliseconds (ms), meaning that acceleration data is collected every 100 ms and stored in the buffer area specified by the terminal device.
- the timing time here can be set according to the requirements of the scene recognition model, and can also be set according to various requirements such as sensor life, cache space occupancy rate and power consumption.
- the infrared image sensor itself can collect infrared images at a higher frame rate, but long-term continuous collection will damage the sensor itself and affect its lifespan. Long-term continuous acquisition will also increase the power consumption of the infrared image sensor, reducing the usage time of the terminal device.
- therefore, the timing of the timer connected to the infrared image sensor can be set. For example, in the face recognition scenario, the image acquisition interval can be set to 1/6 second, that is, 6 frames of images are collected per second; in other recognition scenarios, the image acquisition interval can be set to 1 second, that is, 1 frame of images per second. It can also be that when the terminal device is in low-battery mode, the interval is set to 1 second, so as to extend the usage time of the terminal device. For some sensors with low power consumption whose collected data occupies little storage space, no timing may be set for the sensor, so as to collect data in real time.
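The interval choices above can be expressed as a small policy function. The scenario names and the low-battery override follow the examples in the text, but the function itself is an illustration, not code from this application.

```python
def image_capture_interval(scenario, low_battery):
    """Return the capture interval (seconds) for the infrared image
    sensor timer, trading recognition latency against power and
    sensor lifespan."""
    if low_battery:
        return 1.0          # 1 frame per second to extend battery life
    if scenario == "face_recognition":
        return 1.0 / 6.0    # higher frame rate for face recognition
    return 1.0              # other recognition scenarios: 1 frame per second
```

The coprocessor would evaluate such a policy when (re)arming each sensor's timer.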
- timer may be a chip with a timing function connected to the sensor, or a timing function built in the sensor, which is not limited herein.
- after the timer reaches the timing time, it instructs the sensor connected to it to start and collect data.
- the specific sensors that need to collect data are selected by the coprocessor based on the scenario recognition model. For example, when it is necessary to determine whether it is currently in a QR code scanning scenario, the terminal device acquires data through an infrared image sensor, and after processing and computing the data collected by the infrared image sensor, the scene recognition process can be completed.
- in some scenario recognition processes, the terminal device also needs to use the audio collector to collect data; after the data collected by the infrared image sensor and the audio collector is processed and computed, the scenario recognition process can be completed.
- the infrared image sensor collects image data, and the image data includes an IR image and an RGB image, where the IR image is a grayscale image that can be used to display a low-light scene, and the RGB image is a color image.
- the infrared image sensor stores the collected image data in the buffer space for subsequent steps.
- in the first application scenario, the first infrared image sensor is arranged in the terminal device in the same plane as the main screen of the terminal device; in the second application scenario, the second infrared image sensor is arranged in the terminal device in the same plane as the main image sensor of the terminal device. The two cases are introduced below.
- the first infrared image sensor can collect image data projected onto the main screen of the terminal device. For example, when the user uses the terminal device to take a selfie, the first infrared image sensor, arranged in the same plane as the main screen of the terminal device, can collect image data of the user's face.
- the second infrared image sensor can collect image data projected onto the main image sensor of the terminal device. For example, when a user uses the main image sensor of the terminal device to scan a two-dimensional code, the second infrared image sensor, arranged in the same plane as the main image sensor of the terminal device, can collect the two-dimensional code image data.
- the first infrared image sensor and the second infrared image sensor may also be arranged at the same time; the arrangement manner and the data collection manner are similar to the foregoing manners and will not be repeated here.
- the audio collector can be arranged at any position on the casing of the terminal device.
- the audio data of the environment where the terminal device is located is collected at a sampling frequency of 16 kHz.
- the acceleration sensor is arranged in the always-on area inside the terminal equipment and is connected to the sensor hub using a two-wire serial bus interface (inter-integrated circuit, I2C) or a serial peripheral interface (SPI). It generally provides an acceleration measurement range of ±2 gravity (G) to ±16 gravity (G), and the collected acceleration data has a precision of up to 16 bits.
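Raw accelerometer samples in such a range are fixed-point counts that must be scaled to physical units. The signed 16-bit register and ±2 g full scale below are assumptions for illustration; the scale factor changes with the configured range.

```python
def raw_to_g(raw, full_scale_g=2, bits=16):
    """Map a signed `bits`-bit raw count to acceleration in g for a
    symmetric ±`full_scale_g` measurement range."""
    lsb_per_g = (2 ** (bits - 1)) / full_scale_g  # counts per g
    return raw / lsb_per_g

# At ±2 g with 16-bit samples, 16384 counts equals exactly +1 g.
```

Switching the sensor to ±16 g keeps the same register width but coarsens the resolution by a factor of eight.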
- the data collected by the sensor can be sent directly to the sensor processor or the scenario recognition model for processing, or it can be stored in the cache area, and the sensor processor or the scenario recognition model can process it by reading the sensor data in the cache area, which is not limited here.
- the sensor processor processes the data
- after data preprocessing is performed on the collected data by a sensor processor corresponding to the sensor (also known as a digital signal processor corresponding to the sensor), the to-be-processed data used by the subsequent scenario recognition model can be generated.
- after acquiring the image data collected by the infrared image sensor, the miniISP processes the image data. For example, when the resolution of the image data collected by the sensor is 640 pixels by 480 pixels, the miniISP can compress the image data to generate to-be-processed image data of 320 pixels by 240 pixels. The miniISP can also perform automatic exposure (AE) processing on the image data. In addition to the above processing methods, the miniISP can also automatically select the image to be processed from the image data according to the brightness information contained in the image data. For example, when the miniISP determines that the current image was acquired in a low-light environment, because the IR image contains more detailed information of a low-light scene than the RGB image, the IR image in the image data is selected for processing.
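The two miniISP behaviours described above — compressing 640×480 input to 320×240 and preferring the IR frame in the dark — can be sketched as follows. The 2×2 averaging and the brightness threshold are assumptions; the patent does not state the miniISP's exact downscaling method or threshold.

```python
def downscale_2x(image):
    """Halve each dimension by averaging 2x2 blocks; a 640x480 frame
    becomes 320x240. `image` is a list of rows of pixel intensities."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1] +
              image[y + 1][x] + image[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def select_frame(ir_image, rgb_luma, dark_threshold=40):
    """Prefer the IR image when the mean RGB luminance indicates a
    low-light scene, since the IR image keeps more detail in the dark."""
    flat = [v for row in rgb_luma for v in row]
    return ir_image if sum(flat) / len(flat) < dark_threshold else rgb_luma
```

The selected, downscaled frame is what gets handed to the scenario recognition model, keeping the always-on compute and memory budget small.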
- Step 303 is an optional step.
- the terminal device uses the data collected by the sensor and / or the to-be-processed data processed by the sensor processor to determine the corresponding target scenario according to the scenario recognition model.
- the scene recognition model runs on the coprocessor and the AI processor, and the AI algorithm in the scene recognition model runs on the AI processor.
- the direction and sequence of data flow in the scenario recognition model differ for different data. For example, the to-be-processed image data generated by the miniISP based on the image data and the to-be-processed audio data generated by the ASP based on the audio data are first loaded into the AI algorithm running on the AI processor in the scenario recognition model, and then the coprocessor determines the target scenario according to the calculation result of the AI processor.
- the acceleration data collected and generated by the acceleration sensor is first processed by the coprocessor, then loaded into the AI algorithm running on the AI processor in the scenario recognition model, and finally the coprocessor determines the target scenario according to the calculation result of the AI processor.
- the scenario recognition model consists of two parts: the first part is the AI algorithm, a neural network model trained offline on the data set collected by the sensors and the to-be-processed data set produced by the sensor processors.
- the second part is to determine the target scenario based on the result of the AI algorithm operation, which is completed by the coprocessor.
- the AI algorithm may be a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a long short-term memory (LSTM) network, or the like.
- CNN is a feed-forward neural network. Its artificial neurons can respond to surrounding units within a part of the coverage area, and it has excellent performance for large-scale image processing.
- the CNN consists of one or more convolutional layers and a fully connected layer at the top (corresponding to a classic neural network), and also includes associated weights and a pooling layer. This structure enables CNN to utilize the two-dimensional structure of the input data.
- the convolution kernel of a convolutional layer in the CNN convolves the image: convolution scans the image with a filter of specific parameters to extract feature values of the image.
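The "scan the image with a filter" operation just described can be written as a minimal valid-padding, stride-1 convolution; the kernel values below are arbitrary.

```python
def conv2d(image, kernel):
    """Slide `kernel` over `image` and return the resulting feature map
    (valid padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            acc = 0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

# A vertical difference kernel responds only where rows change value,
# i.e. it extracts a horizontal-edge feature.
edges = conv2d([[0, 0, 0], [0, 0, 0], [9, 9, 9]], [[1], [-1]])
```

A convolutional layer applies many such kernels in parallel and learns their parameters during training.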
- offline training refers to model design and training on deep learning frameworks such as TensorFlow and Caffe (convolutional architecture for fast feature embedding).
- take the infrared image sensor as an example.
- several scenario recognition models can use infrared image data in the terminal device, for example: the scan-QR-code scenario recognition model, the scanned-code scenario recognition model, and the selfie scenario recognition model. These scenario recognition models are introduced separately below.
- for the scan-QR-code scenario recognition model, the neural network model obtained by offline training and loaded in the AI processor uses the CNN algorithm: 100,000 two-dimensional code images and 100,000 non-two-dimensional-code images are collected through the sensor and labelled respectively (with QR code or without QR code); after training on TensorFlow, the neural network model and related parameters are obtained. Then the image data collected by the second infrared image sensor is input into the neural network model for network derivation, and the result of whether the image contains a two-dimensional code can be obtained.
- the scan-QR-code scenario recognition model can also identify whether the image acquired by the terminal device contains a barcode, among other results.
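The model above is a CNN trained on 100,000 labelled images in TensorFlow. As a stand-in that keeps the same offline shape — collect labelled samples, train, then run "network derivation" on new input — this sketch trains a one-feature logistic regression on synthetic data; the "transition density" feature and all numbers are invented for illustration.

```python
import math
import random

random.seed(0)

def make_sample(has_qr):
    # Pretend QR images have a high black/white transition density.
    base = 0.8 if has_qr else 0.2
    return base + random.uniform(-0.1, 0.1), 1 if has_qr else 0

# "Collect" and label the training set (QR / non-QR alternating).
data = [make_sample(i % 2 == 0) for i in range(200)]

# Offline training: plain stochastic gradient descent on logistic loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x
        b -= lr * (p - y)

def contains_qr(feature):
    """The inference ("network derivation") step on a new feature value."""
    return 1.0 / (1.0 + math.exp(-(w * feature + b))) > 0.5
```

In the real pipeline, only the trained parameters are shipped to the AI processor; training itself stays offline on the deep learning framework.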
- for the scanned-code scenario recognition model, the neural network model obtained by offline training and loaded in the AI processor uses the CNN algorithm: 100,000 images containing a code scanning device and 100,000 images without a code scanning device are collected through the sensor. An image containing a code scanning device is image data collected by the sensor that includes the scanning part of a device such as a code scanner, a code scanning gun, a smart phone, or a wearable device such as a smart bracelet. Taking the smart phone as an example, when the image contains the main image sensor part of the smartphone, the image is an image containing a code scanning device.
- for the selfie scenario recognition model, the neural network model obtained by offline training in the AI processor uses the CNN algorithm: 100,000 images containing human faces and 100,000 images without human faces are collected through the sensors, where an image containing a human face is an image containing part or all of a human face, and they are labelled respectively (with or without a human face). After training on TensorFlow, the neural network model and related parameters are obtained; then the image data collected by the first infrared image sensor is input into the neural network model for network derivation, and whether the image contains a face can be obtained.
- audio data collected by the audio collector can also be used to determine the target scenario.
- image data, audio data, and acceleration data may also be combined to determine whether the current scenario of the terminal device is a sports scenario, or a variety of data may be used to determine whether the terminal device is currently in a driving scenario.
- the coprocessor may determine the business processing method corresponding to the target scenario, or the coprocessor may send the determined target scenario to the main processor, and the main processor determines the business processing method corresponding to the target scenario.
- the business processing method determined according to the driving scenario is to start the driving mode of the terminal device and/or start the driving mode function of an application program in the terminal device and/or display a driving mode icon in the standby always-on display area of the screen of the terminal device, where the driving mode of the terminal includes a navigation function and a voice assistant, and the driving mode icon is used to start the driving mode.
- starting the driving mode of the terminal device and starting the driving mode function of the application program in the terminal device are steps performed by the main processor, while displaying the driving mode icon in the standby always-on display area of the screen is a step performed by the coprocessor.
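The scenario-to-action mappings described in this application can be summarised as a dispatch table; the action names here are shorthand for the steps listed above, not identifiers from the patent.

```python
# Hypothetical mapping from a recognized target scenario to the
# business processing actions this document associates with it.
SCENARIO_ACTIONS = {
    "scan_qr_code": ["start_main_image_sensor", "open_scan_app"],
    "sports": ["show_music_play_icon", "start_health_monitoring"],
    "driving": ["enable_driving_mode", "show_driving_mode_icon"],
}

def business_processing(target_scenario):
    """Return the preset actions for a scenario; unknown scenarios
    trigger no business processing."""
    return SCENARIO_ACTIONS.get(target_scenario, [])
```

Per the division of labour above, the coprocessor could execute display-only actions (such as showing an icon) itself and forward the rest to the main processor.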
- a method for business processing uses a variety of sensors such as a traditional sensor, an infrared image sensor, and an audio collector to collect external multi-dimensional information, thereby improving the terminal device's perception capability.
- the AI processor is a dedicated chip optimized for AI algorithms; using the AI processor allows the terminal device to greatly increase the running speed of AI algorithms while reducing power consumption. Since the coprocessor runs in the always-on (AO) area of the terminal device and does not require the main processor to be switched on, scene recognition can still be performed even when the terminal device is in the off-screen state.
- the following embodiments describe how the terminal device determines the target scenario and the business processing method corresponding to the target scenario in different scenarios.
- FIG. 4 is a schematic diagram of an embodiment of intelligently starting an application program provided by an embodiment of the present application. The embodiment includes:
- step 401 is similar to step 301 in FIG. 3 and will not be repeated here.
- step 402 is similar to step 302 in FIG. 3 and will not be repeated here.
- the sensor processor processes the data
- step 403 is similar to step 303 in FIG. 3 and will not be repeated here.
- the method for determining whether it is a target scenario based on the data collected by the sensor is similar to the method in step 304 in FIG. 3, and details are not described here.
- step 405 is entered; if the terminal device determines, based on the currently acquired data, that the scenario in which the terminal device is located is not the target scenario, step 401 is entered, waiting to acquire and process the data collected by the sensor next time.
- after the terminal device determines the target scenario in which it is currently located according to the data collected by the sensor, the terminal device can start the target application corresponding to the target scenario.
- when the terminal device determines that the current scenario is a sports scenario, the terminal device can start a navigation application, such as Gaode Map; it can also start a health monitoring application to monitor the physiological data of the terminal device user, and can also start a music playback application and play music automatically.
- when the terminal device obtains that the current image contains a code scanning device, the terminal device can determine that it is currently in a QR code scanning scenario. At this time, the terminal device can activate the main image sensor and launch the home screen application, such as the camera application, or open an application with a QR code scanning function and further open the QR code scanning function in that application, for example, the "scan" function in the browser application, where the "scan" function is used to scan a QR code image and provide the scanned data to the browser for use.
- when the terminal device obtains that the current image contains a code scanning device, the terminal device can determine that it is currently in a scanned-code scenario. At this time, the terminal device can open an application program that presents a QR code and / or barcode; after the home screen of the terminal device is opened, the QR code and / or barcode of the application program is displayed on the home screen. For example, when the terminal device determines that the current image contains a barcode scanning device, it opens the home screen of the terminal device and displays the payment QR code and / or barcode of a payment application, which may be Alipay or WeChat.
- the terminal device can determine that it is currently in a self-portrait scenario. At this time, the terminal device can activate the secondary image sensor located in the same plane as the main screen, automatically turn on an application that uses the secondary image sensor, such as the self-timer function in the camera application, and launch the home screen, displaying the self-timer function interface of the camera application on the home screen.
- the terminal device can automatically recognize the current scene based on the infrared image sensor and intelligently start the application corresponding to the recognized target scenario, which improves user convenience.
- FIG. 5 is a schematic diagram of an embodiment of an intelligent recommendation service provided by an embodiment of the present application. The embodiment includes:
- step 501 is similar to step 301 in FIG. 3 and will not be repeated here.
- step 502 is similar to step 302 in FIG. 3 and will not be repeated here.
- the sensor processor processes the data
- step 503 is similar to step 303 in FIG. 3 and will not be repeated here.
- the method for determining whether it is a target scenario based on the data collected by the sensor is similar to the method in step 304 in FIG. 3, and details are not described here.
- step 505 is entered; if the terminal device determines, based on the currently acquired data, that the scenario in which the terminal device is located is not the target scenario, step 501 is entered, waiting to acquire and process the data collected by the sensor next time.
- the terminal device may recommend a target service corresponding to the target scenario.
- the specific methods of recommending target services are introduced below.
- after the terminal device determines the target scenario it is in, it can recommend the target service corresponding to the target scenario to the terminal device user, which includes: displaying the function entrance of the target service in the always on display (AOD) area of the terminal device, displaying the program entry of the application included in the target service in the AOD area, automatically starting the target service, and automatically starting the application included in the target service.
- the terminal device may display the mute icon in the AOD area, and the terminal device can start the mute function upon receiving the user's operation instruction on the mute icon.
- the mute function is to set the volume of all applications in the terminal device to 0.
- the terminal device can start the vibration function by receiving the user's operation instruction on the vibration icon.
- the vibration function sets the volume of all applications in the terminal device to 0 and sets the alert sound of all applications in the terminal device to vibration.
- if the terminal device does not receive an operation instruction on the corresponding icon in the AOD area within a period of time, such as 15 minutes, the terminal device may automatically activate the mute function or the vibration function.
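The icon-timeout behaviour can be sketched as a small state check. The 15-minute default matches the example in the text, while the return labels are hypothetical names for the three outcomes:

```python
def aod_action(elapsed_s, user_tapped, timeout_s=15 * 60):
    """Conference-scenario sketch: a tap on the AOD mute icon activates
    mute immediately; with no tap, mute is activated automatically once
    the timeout (e.g. 15 minutes) elapses."""
    if user_tapped:
        return "mute_on_user"   # user tapped the mute icon
    if elapsed_s >= timeout_s:
        return "mute_auto"      # no instruction received in time
    return "wait"               # keep showing the icon

print(aod_action(16 * 60, False))
```

The same pattern applies to the vibration icon, with a different action on timeout.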
- the terminal device may display the music playback application icon in the AOD area, and the terminal device may start the music playback application by receiving a user's operation instruction on the music playback application icon.
- the terminal device can perform service recommendation in a low power consumption state such as the screen-off state, can use various sensor data such as image, audio, and acceleration data as the basis for situational awareness, and improves situational awareness accuracy through deep learning algorithms, thereby improving user convenience.
- FIG. 6 is a schematic flowchart of an application scenario of a business processing method in an embodiment of the present application. The application scenario of the business processing method in this embodiment includes:
- step S1 when the terminal device is connected to the peer device via Bluetooth, the user can mark whether the peer device currently connected via Bluetooth is a car. After the peer device is marked as a car, each time the terminal device is connected to the peer device via Bluetooth, the terminal device can confirm that the currently connected peer device is a car.
- the coprocessor in the AO area of the terminal device obtains the Bluetooth connection status of the terminal device at a preset interval, generally 10 seconds;
- step S2: is the terminal device connected to the car's Bluetooth?
- after the terminal device obtains the current Bluetooth connection status, it can learn whether the terminal device currently has a peer device connected through Bluetooth. If there is a Bluetooth-connected peer device, it further confirms whether that peer device carries the car mark set by the user. If the peer device has the car mark set by the user, it can be confirmed that the terminal device is currently connected to the car's Bluetooth, and step S8 is entered; if the current Bluetooth status of the terminal device is not connected, or the Bluetooth-connected peer device does not have the car mark set by the user, step S3 is entered;
- step S3: the terminal device obtains relevant data of the ride-hailing software running in the terminal device and confirms, according to that data, whether the ride-hailing software is currently started, that is, whether the current user is using the ride-hailing software. If it is confirmed that the current user is using the ride-hailing software, step S9 is entered; if it is confirmed that the current user is not using the ride-hailing software, step S4 is entered;
- step S4: the terminal device uses an acceleration sensor and a gyroscope to collect acceleration data and angular velocity data, and pre-processes the collected acceleration data and angular velocity data, including resampling the data. For example, the raw acceleration data collected by the acceleration sensor has a sampling rate of 100 hertz (Hz), and the acceleration data obtained after resampling has a sampling rate of 1 Hz. The sampling rate obtained after resampling is determined by the sampling rate of the neural network model applied in the scenario recognition model, and is generally consistent with the model's sample rate.
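The 100 Hz to 1 Hz resampling step might look like the following. Averaging fixed windows is one simple decimation choice; the patent does not specify the algorithm, so the function and rates below are illustrative:

```python
def resample(samples, in_rate_hz, out_rate_hz):
    """Down-sample by averaging fixed-size windows, e.g. raw 100 Hz
    acceleration data down to the 1 Hz rate the scenario recognition
    model expects. A sketch; real pre-processing may filter first."""
    step = in_rate_hz // out_rate_hz
    return [sum(samples[i:i + step]) / step
            for i in range(0, len(samples) - step + 1, step)]

raw = [float(i % 100) for i in range(300)]   # 3 seconds of 100 Hz data
print(resample(raw, 100, 1))                 # 3 samples at 1 Hz
```

The same routine serves the gyroscope's angular velocity stream, with the output rate matched to whatever rate the model was trained at.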
- the RAM includes double data rate synchronous dynamic random access memory (DDR), DDR2, DDR3, DDR4, and the upcoming DDR5;
- step S5 the scenario recognition model in the terminal device obtains the pre-processed acceleration data and angular velocity data stored in the RAM, and the scenario recognition model confirms whether the current terminal device is in a driving scenario according to the pre-processed acceleration data and angular velocity data. If yes, go to step S6, if no, go to step S9;
- step S6 after the terminal device confirms that the current terminal device is in the driving scenario based on the acceleration data and angular velocity data, since the scenario recognition results based on the acceleration data and angular velocity data are not highly reliable, further sensor data needs to be obtained for scenario recognition .
- the terminal device acquires the image data collected by the infrared image sensor and the audio data collected by the audio collector and stores the collected image data and audio data in the RAM of the terminal device; alternatively, the collected image data and audio data are processed by the miniISP and the ASP respectively, after which the processed image data and audio data are stored in the RAM of the terminal device;
- step S7 the terminal device acquires the image data and audio data in the RAM, and loads the image data and audio data into the scenario recognition model to perform scenario recognition, and confirms whether the current terminal device is in a driving scenario based on the image data and audio data. If yes, go to step S8, if no, go to step S9;
- step S8: the terminal device displays a driving scenario icon in the AOD area; the driving scenario icon is the entrance to the driving scenario function of the terminal device. When the terminal device receives an operation instruction triggered by the user through the driving scenario icon, the terminal device starts the driving scenario mode, which includes: starting the navigation application, enlarging the font size of the characters displayed by the terminal device, and starting the voice operation assistant, which can control the operation of the terminal device according to the user's voice instructions, such as dialing a phone number according to the user's voice instruction;
- step S9 the terminal device ends the recognition operation of the driving scenario.
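Steps S1 through S9 form a staged decision flow: cheap checks first (car Bluetooth mark, ride-hailing software), then motion-sensor recognition, then image/audio confirmation. A sketch of that flow, where the `state` keys and the two stand-in model callables are hypothetical names, not part of the patent:

```python
def detect_driving(state):
    """Staged recognition following S1-S9 of FIG. 6."""
    if state.get("car_bluetooth"):            # S1/S2: marked car peer connected
        return "driving"                      # -> S8
    if state.get("ride_hailing_active"):      # S3: user is a passenger
        return "not_driving"                  # -> S9
    if not state["motion_model"](state["accel"], state["gyro"]):   # S4/S5
        return "not_driving"                  # -> S9
    if state["av_model"](state["image"], state["audio"]):          # S6/S7
        return "driving"                      # confirmed -> S8
    return "not_driving"                      # -> S9

state = {
    "car_bluetooth": False, "ride_hailing_active": False,
    "accel": [0.1], "gyro": [0.2], "image": b"", "audio": b"",
    "motion_model": lambda a, g: True,   # stand-in scenario model
    "av_model": lambda i, au: True,      # stand-in confirmation model
}
print(detect_driving(state))
```

The two-stage model check mirrors the text's point that motion data alone is not reliable enough, so image and audio data are fetched only when the cheaper check already suggests driving.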
- an artificial intelligence algorithm is used to determine whether it is currently a driving scenario, which improves the recognition accuracy of driving scenarios.
- FIG. 7 is a schematic structural diagram of a computer system according to an embodiment of the present application.
- the computer system may be a terminal device.
- the computer system includes a communication module 710, a sensor 720, a user input module 730, an output module 740, a processor 750, an audio and video input module 760, a memory 770, and a power supply 780.
- the computer system provided in this embodiment may further include an AI processor 790.
- the communication module 710 may include at least one module that enables communication between the computer system and the communication system or other computer systems.
- the communication module 710 may include one or more of a wired network interface, a broadcast receiving module, a mobile communication module, a wireless Internet module, a local area communication module, and a location (or positioning) information module.
- the sensor 720 may sense the current state of the system, such as the open / closed state, position, whether there is contact with the user, direction, and acceleration / deceleration, and the sensor 720 may generate a sensing signal for controlling the operation of the system.
- the sensor 720 includes one or more of an infrared image sensor, an audio collector, an acceleration sensor, a gyroscope, an ambient light sensor, a proximity light sensor, and a geomagnetic sensor.
- the user input module 730 is used to receive input digital information, character information, or contact touch operation / contactless gestures, and receive signal input related to user settings and function control of the system.
- the user input module 730 includes a touch panel and / or other input devices.
- the output module 740 includes a display panel for displaying information input by the user, information provided to the user, various menu interfaces of the system, and the like.
- the display panel may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
- the touch panel may cover the display panel to form a touch display screen.
- the output module 740 may also include an audio output module, an alarm, and a haptic module.
- the audio and video input module 760 is used to input audio signals or video signals.
- the audio and video input module 760 may include a camera and a microphone.
- the power supply 780 may receive external power and internal power under the control of the processor 750 and provide power required for the operation of various components of the system.
- the processor 750 includes one or more processors.
- the processor 750 is a main processor in the computer system.
- the processor 750 may include a central processor and a graphics processor.
- the central processor has multiple cores and belongs to a multi-core processor. The multiple cores can be integrated on the same chip, or they can be independent chips.
- the memory 770 stores computer programs, which include an operating system program 772, an application program 771, and the like.
- typical operating systems include desktop or notebook systems such as Microsoft's Windows and Apple's MacOS, and mobile terminal systems such as Google's Android.
- the method provided in the foregoing embodiment may be implemented by software, and may be regarded as a specific implementation of the operating system program 772.
- the memory 770 may be one or more of the following types: flash memory, hard disk type memory, micro multimedia card memory, card memory (such as SD or XD memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), replay protected memory block (RPMB), magnetic memory, magnetic disk, or optical disk.
- the storage 770 may also be a network storage device on the Internet, and the system may perform operations such as updating or reading the storage 770 on the Internet.
- the processor 750 is used to read the computer program in the memory 770 and then execute the method defined by the computer program. For example, the processor 750 reads the operating system program 772 to run the operating system on the system and implement various functions of the operating system, or reads one or more application programs 771 to run applications on the system.
- the memory 770 also stores other data 773 in addition to computer programs.
- the AI processor 790 is mounted on the processor 750 as a coprocessor, and is used to perform tasks assigned to it by the processor 750.
- the AI processor 790 may be called by the scene recognition model to implement some of the complex algorithms involved in scene recognition. Specifically, the AI algorithm of the scene recognition model runs on multiple cores of the processor 750, and then the processor 750 calls the AI processor 790, and the result realized by the AI processor 790 is returned to the processor 750.
- connection relationship of the above modules is only an example, and the method provided in any embodiment of the present application may also be applied to terminal devices of other connection methods, for example, all modules are connected through a bus.
- the processor 750 included in the terminal device further has the following functions:
- obtain data to be processed, where the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least to-be-processed image data generated from the image data collected by the infrared image sensor;
- the target scenario corresponding to the to-be-processed data is determined through a scenario recognition model, where the scenario recognition model is obtained by training the sensor data set and the scenario type set;
- the processor 750 is specifically used to perform the following steps:
- the target scenario corresponding to the data to be processed is determined by the AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm, and the AI algorithm runs in the AI processor 790.
- the processor 750 is specifically used to perform the following steps:
- the sensor further includes at least one of an audio collector and a first sub-sensor, and the to-be-processed data includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from the audio data collected by the audio collector, and the first to-be-processed sub-data is generated from the first sub-sensor data collected by the first sub-sensor.
- the processor 750 is specifically used to perform the following steps:
- the processor 750 further includes at least one of an image signal processor, an audio signal processor, and the first sub-sensor processor,
- the image signal processor is used to obtain image data through the infrared image sensor when the image acquisition preset running time is reached, wherein the image data is the data collected by the infrared image sensor;
- the AI processor 790 is specifically configured to obtain the image data to be processed through the image signal processor, wherein the image data to be processed is generated by the image signal processor according to the image data;
- the audio signal processor is used to obtain the audio data through the audio collector when the preset operation time of the audio acquisition is reached;
- the AI processor 790 is specifically configured to obtain the to-be-processed audio data through the audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data;
- the first sub-sensor processor is configured to acquire first sub-sensor data through the first sub-sensor when the first preset running time is reached, wherein the first sub-sensor data is data collected by the first sub-sensor ;
- the coprocessor is specifically used to obtain the first to-be-processed sub-data through the first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
- the processor 750 is specifically used to perform the following steps:
- the coprocessor is specifically used to determine, if the target scenario is a QR code scanning scenario, that the business processing method is to activate the main image sensor of the terminal device and / or to start an application in the terminal device that supports the QR code scanning function.
- the processor 750 is specifically used to perform the following steps:
- the coprocessor is specifically used to determine, if the target scenario is a conference scenario, that the business processing method is to activate the mute mode of the terminal device and / or to activate the mute function of the application program in the terminal device and / or to display a mute mode icon in the screen standby normal display area of the terminal device, where the mute mode icon is used to activate the mute mode.
- the processor 750 is specifically used to perform the following steps:
- the coprocessor is specifically used to determine, if the target scenario is a sport scenario, that the business processing method is to start the sport mode of the terminal device and / or to start the sport mode function of the application in the terminal device and / or to display a music playback icon in the screen standby normal display area of the terminal device, where the sport mode of the terminal device includes a step counting function, and the music playback icon is used to start or pause music playback.
- the processor 750 is specifically used to perform the following steps:
- the coprocessor is specifically used to determine, if the target scenario is a driving scenario, that the business processing method is to start the driving mode of the terminal device and / or to start the driving mode function of the application in the terminal device and / or to display a driving mode icon in the screen standby normal display area of the terminal device, where the driving mode of the terminal device includes a navigation function and a voice assistant, and the driving mode icon is used to activate the driving mode.
- FIG. 8 is a schematic structural diagram of an AI processor provided by an embodiment of the present application.
- the AI processor 800 is connected to the main processor and external memory.
- the core part of the AI processor 800 is an arithmetic circuit 803, and the arithmetic circuit 803 is controlled by the controller 804 to extract data in the memory and perform mathematical operations.
- the arithmetic circuit 803 internally includes multiple processing engines (PE). In some implementations, the arithmetic circuit 803 is a two-dimensional systolic array; it may also be a one-dimensional systolic array or another electronic circuit capable of performing mathematical operations such as multiplication and addition. In other implementations, the arithmetic circuit 803 is a general-purpose matrix processor.
- the arithmetic circuit 803 takes the data corresponding to the matrix B from the weight memory 802 and caches it on each PE of the arithmetic circuit 803.
- the operation circuit 803 takes matrix A data and matrix B from the input memory 801 to perform matrix operation, and the partial result or final result of the obtained matrix is stored in an accumulator 808.
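Behaviourally, the circuit computes a matrix product with per-PE multiply-accumulates. This plain-Python sketch mirrors the data movement just described (B cached from the weight memory, rows of A streamed from the input memory, partial sums collected in the accumulator) without modelling the systolic timing:

```python
def matmul(A, B):
    """A @ B with explicit multiply-accumulate steps. `acc` plays the
    role of accumulator 808; each inner product step is the work one
    PE contributes."""
    n, k, m = len(A), len(B), len(B[0])
    acc = [[0.0] * m for _ in range(n)]          # partial/final results
    for i in range(n):
        for j in range(m):
            for t in range(k):
                acc[i][j] += A[i][t] * B[t][j]   # one multiply-accumulate
    return acc

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

In the hardware, the three loops are unrolled across the PE grid so the multiply-accumulates happen in parallel rather than sequentially.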
- the unified memory 806 is used to store input data and output data.
- the weight data is directly transferred to the weight memory 802 through the storage unit access controller 805 (for example, direct memory access controller (DMAC)).
- the input data is also transferred to the unified memory 806 through the storage unit access controller 805.
- the bus interface unit 810 (BIU) is used for interaction between the advanced extensible interface (AXI) bus and both the storage unit access controller 805 and the instruction fetch memory 809 (instruction fetch buffer).
- the bus interface unit 810 is used by the instruction fetch memory 809 to obtain instructions from the external memory, and is also used by the storage unit access controller 805 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
- the storage unit access controller 805 is mainly used to carry the input data in the external memory to the unified memory 806, the weight data to the weight memory 802, or the input data to the input memory 801.
- the vector calculation unit 807 usually includes multiple operation processing units and, if necessary, further processes the output of the arithmetic circuit 803, such as vector multiplication, vector addition, exponential operation, logarithm operation, and / or size comparison.
- the vector calculation unit 807 can store the processed vector in the unified memory 806.
- the vector calculation unit 807 may apply a non-linear function to the output of the arithmetic circuit 803, such as a vector of accumulated values, to generate an activation value.
- the vector calculation unit 807 generates normalized values, merged values, or both.
- the processed vector can be used as the activation input of the arithmetic circuit 803.
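As a concrete stand-in for the vector unit's post-processing, the sketch below applies a non-linear function and then normalizes. The patent names the operations only generically, so ReLU and L2 normalization are assumed choices for illustration:

```python
import math

def vector_postprocess(v):
    """Sketch of unit 807's pipeline: element-wise non-linearity to
    produce activation values, then normalization. The normalized
    vector could feed back as activation input to arithmetic
    circuit 803."""
    activated = [max(0.0, x) for x in v]                    # ReLU activation
    norm = math.sqrt(sum(x * x for x in activated)) or 1.0  # avoid /0
    return [x / norm for x in activated]

print(vector_postprocess([3.0, -1.0, 4.0]))
```

Merging (pooling) would be a similar element-wise pass reducing groups of values to one, again outside the matrix-multiply datapath.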
- the instruction fetch memory 809 connected to the controller 804 is used to store instructions used by the controller 804.
- the unified memory 806, the input memory 801, the weight memory 802, and the instruction fetch memory 809 are all on-chip memories.
- the external memory in the figure is independent of the AI processor hardware architecture.
- FIG. 9 is a schematic diagram of an embodiment of the business processing device in the embodiment of the present application, which includes:
- the obtaining unit 901 is configured to obtain data to be processed, wherein the data to be processed is generated from data collected by a sensor, the sensor includes at least an infrared image sensor, and the data to be processed includes at least an image collected by the infrared image sensor Image data to be processed generated by the data;
- the determining unit 902 is configured to determine a target scenario corresponding to the data to be processed through a scenario recognition model, where the scenario recognition model is obtained by training the sensor data set and the scenario type set;
- the determining unit 902 is also used to determine the business processing mode according to the target scenario.
- the terminal device collects data through a sensor deployed inside the terminal device or connected to the terminal device; the sensor includes at least an infrared image sensor, and the terminal device generates data to be processed according to the collected data, where the data to be processed includes at least to-be-processed image data generated from the image data collected by the infrared image sensor.
- after the terminal device obtains the data to be processed, it can determine the target scenario corresponding to the data to be processed through the scenario recognition model.
- the scenario recognition model is obtained by offline training on the data set collected by the sensors and the scenario type set corresponding to the different data; the offline training uses a deep learning framework for model design and training.
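The offline training step can be sketched minimally: labeled sensor feature vectors in, model parameters out. The embodiment uses a deep learning framework (e.g. TensorFlow); a perceptron keeps this sketch dependency-free, and the two features and their values are hypothetical:

```python
def train(dataset, epochs=200, lr=0.1):
    """Perceptron stand-in for offline scenario-model training.
    dataset: list of (feature_vector, label) with label 1 = target
    scenario, 0 = other."""
    dim = len(dataset[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, label in dataset:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical two-feature windows: (mean acceleration, acceleration variance).
data = [([0.1, 0.1], 0), ([0.9, 0.8], 1), ([0.2, 0.0], 0), ([1.0, 0.9], 1)]
w, b = train(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print(predict([0.95, 0.85]))
```

The trained parameters (here `w`, `b`; in practice deep-network weights) are what gets deployed to the terminal device for on-device inference.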
- after the terminal device determines the current target scenario, it can determine the corresponding business processing method according to the target scenario.
- the target scenario of the current terminal device can be determined, and the corresponding business processing method can be determined according to the target scenario, thereby improving user convenience.
- the determining unit 902 is specifically configured to determine the target scenario corresponding to the data to be processed through the AI algorithm in the scenario recognition model, where the AI algorithm includes a deep learning algorithm, and the AI algorithm runs in the AI processor.
- the terminal device specifically uses the AI algorithm in the context recognition model to determine the target scenario corresponding to the data to be processed.
- the AI algorithm includes a deep learning algorithm, which runs on the AI processor in the terminal device.
- the AI processor has powerful parallel computing capabilities and high efficiency when running AI algorithms. Therefore, the scene recognition model uses the AI algorithm to determine the specific target scenario, and the AI algorithm runs on the AI processor in the terminal device, which improves the efficiency of scene recognition and further improves user convenience.
- the sensor further includes at least one of an audio collector and a first sub-sensor, and the to-be-processed data includes at least one of to-be-processed audio data and first to-be-processed sub-data, where the to-be-processed audio data is generated from the audio data collected by the audio collector, and the first to-be-processed sub-data is generated from the first sub-sensor data collected by the first sub-sensor.
- the sensor deployed in the terminal device also includes at least one of an audio collector and a first sub-sensor, where the first sub-sensor may be one or more sensors such as an acceleration sensor, a gyroscope, an ambient light sensor, a proximity light sensor, and a geomagnetic sensor.
- the audio collector collects audio data, which the terminal device then processes to generate the to-be-processed audio data.
- the first sub-sensor data is collected by the first sub-sensor, and processed by the terminal device to generate first sub-sensor data to be processed.
- the terminal equipment uses multiple sensors to collect data in multiple dimensions, which improves the accuracy of scene recognition.
- the acquisition unit 901 is specifically configured to acquire image data through the infrared image sensor when the preset operation time of image acquisition is reached, wherein the image data is data collected by the infrared image sensor;
- the acquiring unit 901 is specifically configured to acquire the image data to be processed through an image signal processor, wherein the image data to be processed is generated by the image signal processor according to the image data;
- the acquiring unit 901 is specifically configured to acquire the audio data through the audio collector when the preset time for audio collection is reached;
- the acquiring unit 901 is specifically configured to acquire the to-be-processed audio data through an audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data;
- the acquiring unit 901 is specifically configured to acquire first sub-sensor data through the first sub-sensor when the first preset running time is reached, wherein the first sub-sensor data is the data collected by the first sub-sensor;
- the acquiring unit 901 is specifically configured to acquire the first to-be-processed sub-data through the first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
- one or more of the infrared image sensor, the audio collector, and the first sub-sensor can each collect its corresponding data after its respective preset running time is reached, obtaining the raw sensor data.
- the terminal device uses the processor corresponding to each sensor to process the raw sensor data and generate the sensor data to be processed.
- each sensor is started to collect data periodically, and the collected raw data is processed by the processor corresponding to that sensor.
- this reduces the cache space occupied by the scenario recognition model, reduces the power consumption of the scenario recognition model, and extends the standby time of the terminal device.
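The timed, per-sensor collection described above can be sketched as a small scheduler in which each sensor has its own preset running time (sampling interval) and the next due sensor is taken from a priority queue. The sensor names and intervals below are illustrative assumptions, not values from the patent.

```python
import heapq

def schedule(intervals, horizon):
    """Return the (time, sensor) collection events due up to `horizon`
    seconds, given each sensor's preset running time in `intervals`."""
    heap = [(t, name) for name, t in intervals.items()]
    heapq.heapify(heap)
    events = []
    while heap:
        t, name = heapq.heappop(heap)
        if t > horizon:
            # The heap pops the earliest event, so everything left is later.
            break
        events.append((t, name))
        # Re-arm this sensor for its next preset running time.
        heapq.heappush(heap, (t + intervals[name], name))
    return events

print(schedule({"image": 5, "audio": 3, "motion": 2}, 6))
```

Waking each sensor only at its preset interval, rather than streaming continuously, is what keeps the recognition path's cache footprint and power draw low.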
- the determining unit 902 is specifically configured to: if the determining unit 902 determines that the target scenario is a scan-QR-code scenario, determine, according to the scan-QR-code scenario, that the service processing manner is to start the main image sensor of the terminal device and/or start an application in the terminal device that supports the QR code scanning function.
- the terminal device determines, according to the data collected by one or more sensors in the terminal device, that the target scenario corresponding to the collected data is the scan-QR-code scenario, and determines the service processing manner corresponding to the scan-QR-code scenario.
- this service processing manner includes starting the main image sensor in the terminal device, so that the terminal device can use the main image sensor to scan the QR code; the terminal device may also start an application that supports the QR code scanning function, for example, starting WeChat and turning on its QR code scanning function. The main image sensor and the application that supports QR code scanning may be started at the same time, or either may be started according to a preset command or an instruction received from the user, which is not limited here.
- the terminal device uses the data collected by multi-dimensional sensors and determines, through the scenario recognition model, that the target scenario is the scan-QR-code scenario, and can then automatically execute the related service processing manner, which improves the intelligence of the terminal device and enhances the convenience of the user's operation.
- the determining unit 902 is specifically configured to: if the determining unit 902 determines that the target scenario is a conference scenario, determine, according to the conference scenario, that the service processing manner is to enable the mute mode of the terminal device and/or enable the mute function of applications in the terminal device.
- when the terminal device determines, according to the data collected by one or more sensors in the terminal device, that the target scenario corresponding to the collected data is the conference scenario, it determines the service processing manner corresponding to the conference scenario, which includes enabling the mute mode of the terminal device.
- when the terminal device is in the mute mode, all applications running on the terminal device are muted.
- the terminal device may also enable the mute function of individual applications running on the terminal device, for example, the mute function of WeChat: the prompt sound of WeChat is then switched to mute.
- a mute mode icon may also be displayed in the always-on standby display area of the screen of the terminal device.
- the terminal device can receive the user's mute operation instruction through the mute mode icon, and enables the mute mode in response to the mute operation instruction.
- the terminal device uses the data collected by multi-dimensional sensors and determines, through the scenario recognition model, that the target scenario is the conference scenario, and can then automatically execute the related service processing manner, which improves the intelligence of the terminal device and enhances the convenience of the user's operation.
- the determining unit 902 is specifically configured to: if the determining unit 902 determines that the target scenario is a sports scenario, determine, according to the sports scenario, that the service processing manner is to enable the motion mode of the terminal device and/or enable the motion mode function of applications in the terminal device.
- the terminal device determines, according to the data collected by one or more sensors in the terminal device, that the target scenario corresponding to the collected data is the sports scenario, and determines the service processing manner corresponding to the sports scenario, which includes enabling the motion mode of the terminal device.
- in the motion mode, the terminal device starts the pedometer application and the physiological data monitoring application.
- the terminal device may also enable the motion mode function of an application in the terminal device, for example, the sports function of the application NetEase Cloud Music: the playback mode of NetEase Cloud Music is then the sports mode.
- a music playing icon may also be displayed in the always-on standby display area of the screen of the terminal device; the terminal device can receive the user's music playing instruction through the music playing icon, and starts or pauses music playback in response to the music playing instruction.
- the terminal device uses the data collected by multi-dimensional sensors and determines, through the scenario recognition model, that the target scenario is the sports scenario, and can then automatically execute the related service processing manner, which improves the intelligence of the terminal device and enhances the convenience of the user's operation.
- the determining unit 902 is specifically configured to: if the determining unit 902 determines that the target scenario is a driving scenario, determine, according to the driving scenario, that the service processing manner is to enable the driving mode of the terminal device and/or enable the driving mode function of applications in the terminal device.
- the terminal device determines, according to the data collected by one or more sensors in the terminal device, that the target scenario corresponding to the collected data is the driving scenario, and determines the service processing manner corresponding to the driving scenario, which includes enabling the driving mode of the terminal device.
- in the driving mode, the terminal device starts the voice assistant, so that the terminal device can perform related operations according to voice instructions input by the user; the terminal device may also start the navigation function.
- the terminal device may also enable the driving mode function of an application in the terminal device, for example, the driving function of the application Gaode Map: the navigation mode of Gaode Map is then the driving mode.
- a driving mode icon may also be displayed in the always-on standby display area of the screen of the terminal device; the terminal device can receive the user's driving mode instruction through the driving mode icon, and enables the driving mode in response to the driving mode instruction.
- the terminal device uses the data collected by multi-dimensional sensors and, after determining through the scenario recognition model that the target scenario is the driving scenario, can automatically execute the related service processing manner, which improves the intelligence of the terminal device and enhances the convenience of the user's operation.
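The four scenario walkthroughs above (scan QR code, conference, sports, driving) all follow the same pattern: a recognized scenario selects a set of service processing actions. That pattern can be summarized as a dispatch table; the action names below are illustrative placeholders, not a real device API.

```python
# Hedged sketch of scenario-to-action dispatch; every name here is an
# assumed placeholder standing in for the behaviors the description lists.
BUSINESS_ACTIONS = {
    "qr_code":    ["start_main_image_sensor", "open_qr_scan_app"],
    "conference": ["enable_mute_mode", "show_mute_icon"],
    "sports":     ["enable_motion_mode", "show_music_icon"],
    "driving":    ["enable_driving_mode", "start_voice_assistant",
                   "start_navigation"],
}

def process(scenario):
    """Return the list of business-processing actions for a recognized
    scenario; an unrecognized scenario triggers no action."""
    return BUSINESS_ACTIONS.get(scenario, [])

print(process("driving"))
```

Keeping the mapping in a table rather than branching code makes it straightforward to add new scenarios (or let actions be started per a preset command or user instruction, as the description allows).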
- the disclosed system, device, and method may be implemented in other ways.
- the device embodiments described above are only schematic.
- the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above integrated unit may be implemented in the form of hardware or software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
- the technical solution of the present application essentially, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Electromagnetism (AREA)
- Toxicology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- User Interface Of Digital Computer (AREA)
- Library & Information Science (AREA)
- Telephone Function (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
- Electrical Discharge Machining, Electrochemical Machining, And Combined Machining (AREA)
- Hardware Redundancy (AREA)
Abstract
Description
Claims (28)
- A service processing method, wherein the method is applied to a terminal device and comprises: obtaining to-be-processed data, wherein the to-be-processed data is generated from data collected by a sensor, the sensor comprises an image sensor, and the to-be-processed data comprises to-be-processed image data generated from image data collected by the image sensor; determining, by using a scenario recognition model, a target scenario corresponding to the to-be-processed data, wherein the scenario recognition model is obtained through training with a sensing data set and a scenario type set; and determining a service processing manner according to the target scenario.
- The method according to claim 1, wherein the determining, by using the scenario recognition model, the target scenario corresponding to the to-be-processed data comprises: determining, by using an AI algorithm in the scenario recognition model, the target scenario corresponding to the to-be-processed data, wherein the AI algorithm comprises a deep learning algorithm, and the AI algorithm runs on an AI processor.
- The method according to claim 2, wherein the sensor further comprises at least one of an audio collector and a first sub-sensor, and the to-be-processed data further comprises at least one of to-be-processed audio data and first to-be-processed sub-data, wherein the to-be-processed audio data is generated from audio data collected by the audio collector, and the first to-be-processed sub-data is generated from first sub-sensor data collected by the first sub-sensor.
- The method according to claim 3, wherein the obtaining the to-be-processed data comprises: when a preset image collection running time is reached, obtaining image data through the image sensor, wherein the image data is data collected by the image sensor, and obtaining the to-be-processed image data through an image signal processor, wherein the to-be-processed image data is generated by the image signal processor according to the image data; and/or when a preset audio collection running time is reached, obtaining the audio data through the audio collector, and obtaining the to-be-processed audio data through an audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data; and/or when a first preset running time is reached, obtaining first sub-sensor data through the first sub-sensor, wherein the first sub-sensor data is data collected by the first sub-sensor, and obtaining the first to-be-processed sub-data through a first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
- The method according to any one of claims 1 to 4, wherein the determining the service processing manner according to the target scenario comprises: if the target scenario is a scan-QR-code scenario, determining, according to the scan-QR-code scenario, that the service processing manner is to start a main image sensor of the terminal device and/or start an application in the terminal device that supports a QR code scanning function.
- The method according to any one of claims 1 to 4, wherein the determining the service processing manner according to the target scenario comprises: if the target scenario is a conference scenario, determining, according to the conference scenario, that the service processing manner is to enable a mute mode of the terminal device, and/or enable a mute function of an application in the terminal device, and/or display a mute mode icon in an always-on standby display area of a screen of the terminal device, wherein the mute mode icon is used to enable the mute mode.
- The method according to any one of claims 1 to 4, wherein the determining the service processing manner according to the target scenario comprises: if the target scenario is a sports scenario, determining, according to the sports scenario, that the service processing manner is to enable a motion mode of the terminal device, and/or enable a motion mode function of an application in the terminal device, and/or display a music playing icon in an always-on standby display area of a screen of the terminal device, wherein the motion mode of the terminal device comprises a step counting function, and the music playing icon is used to start or pause music playback.
- The method according to any one of claims 1 to 4, wherein the determining the service processing manner according to the target scenario comprises: if the target scenario is a driving scenario, determining, according to the driving scenario, that the service processing manner is to enable a driving mode of the terminal device, and/or enable a driving mode function of an application in the terminal device, and/or display a driving mode icon in an always-on standby display area of a screen of the terminal device, wherein the driving mode of the terminal device comprises a navigation function and a voice assistant, and the driving mode icon is used to enable the driving mode.
- A terminal device, comprising a sensor and a processor, wherein the sensor comprises at least an image sensor; the processor is configured to obtain to-be-processed data, wherein the to-be-processed data is generated from data collected by the sensor, and the to-be-processed data comprises at least to-be-processed image data generated from image data collected by the image sensor; the processor is further configured to determine, by using a scenario recognition model, a target scenario corresponding to the to-be-processed data, wherein the scenario recognition model is obtained through training with a sensing data set obtained by the sensor and a scenario type set; and the processor is further configured to determine a service processing manner according to the target scenario.
- The terminal device according to claim 9, wherein the processor further comprises a coprocessor and an AI processor, and the processor is specifically configured to determine, by using an AI algorithm in the scenario recognition model, the target scenario corresponding to the to-be-processed data, wherein the AI algorithm comprises a deep learning algorithm, and the AI algorithm runs on the AI processor.
- The terminal device according to claim 10, wherein the sensor further comprises at least one of an audio collector and a first sub-sensor.
- The terminal device according to claim 11, wherein the processor further comprises at least one of an image signal processor, an audio signal processor, and a first sub-sensor processor; the image signal processor is configured to: when a preset image collection running time is reached, obtain image data through the image sensor, wherein the image data is data collected by the image sensor, and the AI processor is specifically configured to obtain the to-be-processed image data through the image signal processor, wherein the to-be-processed image data is generated by the image signal processor according to the image data; and/or the audio signal processor is configured to: when a preset audio collection running time is reached, obtain the audio data through the audio collector, and the AI processor is specifically configured to obtain the to-be-processed audio data through the audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data; and/or the first sub-sensor processor is configured to: when a first preset running time is reached, obtain first sub-sensor data through the first sub-sensor, wherein the first sub-sensor data is data collected by the first sub-sensor, and the coprocessor is specifically configured to obtain the first to-be-processed sub-data through the first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
- The terminal device according to any one of claims 9 to 12, wherein the coprocessor is specifically configured to: if the target scenario is a scan-QR-code scenario, determine, according to the scan-QR-code scenario, that the service processing manner is to start a main image sensor of the terminal device and/or start an application in the terminal device that supports a QR code scanning function.
- The terminal device according to any one of claims 9 to 12, wherein the coprocessor is specifically configured to: if the target scenario is a conference scenario, determine, according to the conference scenario, that the service processing manner is to enable a mute mode of the terminal device, and/or enable a mute function of an application in the terminal device, and/or display a mute mode icon in an always-on standby display area of a screen of the terminal device, wherein the mute mode icon is used to enable the mute mode.
- The terminal device according to any one of claims 9 to 12, wherein the coprocessor is specifically configured to: if the target scenario is a sports scenario, determine, according to the sports scenario, that the service processing manner is to enable a motion mode of the terminal device, and/or enable a motion mode function of an application in the terminal device, and/or display a music playing icon in an always-on standby display area of a screen of the terminal device, wherein the motion mode of the terminal device comprises a step counting function, and the music playing icon is used to start or pause music playback.
- The terminal device according to any one of claims 9 to 12, wherein the coprocessor is specifically configured to: if the target scenario is a driving scenario, determine, according to the driving scenario, that the service processing manner is to enable a driving mode of the terminal device, and/or enable a driving mode function of an application in the terminal device, and/or display a driving mode icon in an always-on standby display area of a screen of the terminal device, wherein the driving mode of the terminal device comprises a navigation function and a voice assistant, and the driving mode icon is used to enable the driving mode.
- A service processing apparatus, wherein the service processing apparatus is applied to a terminal device and comprises: an obtaining unit, configured to obtain to-be-processed data, wherein the to-be-processed data is generated from data collected by a sensor, the sensor comprises at least an image sensor, and the to-be-processed data comprises at least to-be-processed image data generated from image data collected by the image sensor; and a determining unit, configured to determine, by using a scenario recognition model, a target scenario corresponding to the to-be-processed data, wherein the scenario recognition model is obtained through training with a sensing data set and a scenario type set; and the determining unit is further configured to determine a service processing manner according to the target scenario.
- The service processing apparatus according to claim 17, wherein the determining unit is specifically configured to determine, by using an AI algorithm in the scenario recognition model, the target scenario corresponding to the to-be-processed data, wherein the AI algorithm comprises a deep learning algorithm, and the AI algorithm runs on an AI processor.
- The service processing apparatus according to claim 18, wherein the sensor further comprises at least one of an audio collector and a first sub-sensor, and the to-be-processed data further comprises at least one of to-be-processed audio data and first to-be-processed sub-data, wherein the to-be-processed audio data is generated from audio data collected by the audio collector, and the first to-be-processed sub-data is generated from first sub-sensor data collected by the first sub-sensor.
- The service processing apparatus according to claim 19, wherein the obtaining unit is specifically configured to: when a preset image collection running time is reached, obtain image data through the image sensor, wherein the image data is data collected by the image sensor, and obtain the to-be-processed image data through an image signal processor, wherein the to-be-processed image data is generated by the image signal processor according to the image data; and/or when a preset audio collection running time is reached, obtain the audio data through the audio collector, and obtain the to-be-processed audio data through an audio signal processor, wherein the to-be-processed audio data is generated by the audio signal processor according to the audio data; and/or when a first preset running time is reached, obtain first sub-sensor data through the first sub-sensor, wherein the first sub-sensor data is data collected by the first sub-sensor, and obtain the first to-be-processed sub-data through a first sub-sensor processor, wherein the first to-be-processed sub-data is generated by the first sub-sensor processor according to the first sub-sensor data.
- The service processing apparatus according to any one of claims 17 to 20, wherein the determining unit is specifically configured to: if the determining unit determines that the target scenario is a scan-QR-code scenario, determine, according to the scan-QR-code scenario, that the service processing manner is to start a main image sensor of the terminal device and/or start an application in the terminal device that supports a QR code scanning function.
- The service processing apparatus according to any one of claims 17 to 20, wherein the determining unit is specifically configured to: if the determining unit determines that the target scenario is a conference scenario, determine, according to the conference scenario, that the service processing manner is to enable a mute mode of the terminal device, and/or enable a mute function of an application in the terminal device, and/or display a mute mode icon in an always-on standby display area of a screen of the terminal device, wherein the mute mode icon is used to enable the mute mode.
- The service processing apparatus according to any one of claims 17 to 20, wherein the determining unit is specifically configured to: if the determining unit determines that the target scenario is a sports scenario, determine, according to the sports scenario, that the service processing manner is to enable a motion mode of the terminal device, and/or enable a motion mode function of an application in the terminal device, and/or display a music playing icon in an always-on standby display area of a screen of the terminal device, wherein the motion mode of the terminal device comprises a step counting function, and the music playing icon is used to start or pause music playback.
- The service processing apparatus according to any one of claims 17 to 20, wherein the determining unit is specifically configured to: if the determining unit determines that the target scenario is a driving scenario, determine, according to the driving scenario, that the service processing manner is to enable a driving mode of the terminal device, and/or enable a driving mode function of an application in the terminal device, and/or display a driving mode icon in an always-on standby display area of a screen of the terminal device, wherein the driving mode of the terminal device comprises a navigation function and a voice assistant, and the driving mode icon is used to enable the driving mode.
- A computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
- A computer program product comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
- A service processing method, wherein the method is applied to a terminal device, an always-on image sensor is configured on the terminal device, and the method comprises: obtaining data, wherein the data comprises image data collected by the image sensor; determining, by using a scenario recognition model, a target scenario corresponding to the data, wherein the scenario recognition model is obtained through training with a sensing data set and a scenario type set; and determining a service processing manner according to the target scenario.
- A terminal device, wherein an always-on image sensor is configured on the terminal device, and the terminal device is configured to implement the method according to any one of claims 1 to 8 and claim 27.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217002422A KR20210022740A (ko) | 2018-11-21 | 2019-05-09 | 서비스 처리 방법 및 관련 장치 |
AU2019385776A AU2019385776B2 (en) | 2018-11-21 | 2019-05-09 | Service processing method and related apparatus |
EP19874765.1A EP3690678A4 (en) | 2018-11-21 | 2019-05-09 | SERVICE PROCESSING METHODS AND RELATED DEVICE |
CA3105663A CA3105663C (en) | 2018-11-21 | 2019-05-09 | Service processing method and related apparatus |
JP2021506473A JP7186857B2 (ja) | 2018-11-21 | 2019-05-09 | サービス処理方法および関連装置 |
US16/992,427 US20200372250A1 (en) | 2018-11-21 | 2020-08-13 | Service Processing Method and Related Apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811392818.7A CN111209904A (zh) | 2018-11-21 | 2018-11-21 | 一种业务处理的方法以及相关装置 |
CN201811392818.7 | 2018-11-21 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/992,427 Continuation US20200372250A1 (en) | 2018-11-21 | 2020-08-13 | Service Processing Method and Related Apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020103404A1 true WO2020103404A1 (zh) | 2020-05-28 |
Family
ID=70773748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/086127 WO2020103404A1 (zh) | 2018-11-21 | 2019-05-09 | 一种业务处理的方法以及相关装置 |
Country Status (8)
Country | Link |
---|---|
US (1) | US20200372250A1 (zh) |
EP (1) | EP3690678A4 (zh) |
JP (1) | JP7186857B2 (zh) |
KR (1) | KR20210022740A (zh) |
CN (1) | CN111209904A (zh) |
AU (1) | AU2019385776B2 (zh) |
CA (1) | CA3105663C (zh) |
WO (1) | WO2020103404A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210056220A1 (en) * | 2019-08-22 | 2021-02-25 | Mediatek Inc. | Method for improving confidentiality protection of neural network model |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021122106A (ja) * | 2020-01-31 | 2021-08-26 | キヤノン株式会社 | 撮像装置、学習装置、撮像装置の制御方法、学習方法、学習済みモデルおよびプログラム |
CN112507356B (zh) * | 2020-12-04 | 2023-01-03 | 上海易校信息科技有限公司 | 一种基于Angular的集中式前端ACL权限控制方法 |
CN112862479A (zh) * | 2021-01-29 | 2021-05-28 | 中国银联股份有限公司 | 一种基于终端姿态的业务处理方法及装置 |
CN113051052B (zh) * | 2021-03-18 | 2023-10-13 | 北京大学 | 物联网系统按需设备调度规划方法与系统 |
CN113194211B (zh) * | 2021-03-25 | 2022-11-15 | 深圳市优博讯科技股份有限公司 | 一种扫描头的控制方法及系统 |
CN117453105A (zh) * | 2021-09-27 | 2024-01-26 | 荣耀终端有限公司 | 退出二维码的方法和装置 |
CN113935349A (zh) * | 2021-10-18 | 2022-01-14 | 交互未来(北京)科技有限公司 | 一种扫描二维码的方法、装置、电子设备及存储介质 |
CN113900577B (zh) * | 2021-11-10 | 2024-05-07 | 杭州逗酷软件科技有限公司 | 一种应用程序控制方法、装置、电子设备及存储介质 |
KR102599078B1 (ko) | 2023-03-21 | 2023-11-06 | 고아라 | 큐티클 케어 세트 및 이를 이용한 큐티클 케어 방법 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110153617A1 (en) * | 2009-12-18 | 2011-06-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for describing and organizing image data |
CN107402964A (zh) * | 2017-06-22 | 2017-11-28 | 深圳市金立通信设备有限公司 | 一种信息推荐方法、服务器及终端 |
CN107786732A (zh) * | 2017-09-28 | 2018-03-09 | 努比亚技术有限公司 | 终端应用推送方法、移动终端及计算机可读存储介质 |
CN108322609A (zh) * | 2018-01-31 | 2018-07-24 | 努比亚技术有限公司 | 一种通知信息调控方法、设备及计算机可读存储介质 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8756173B2 (en) * | 2011-01-19 | 2014-06-17 | Qualcomm Incorporated | Machine learning of known or unknown motion states with sensor fusion |
US8892162B2 (en) * | 2011-04-25 | 2014-11-18 | Apple Inc. | Vibration sensing system and method for categorizing portable device context and modifying device operation |
PL398136A1 (pl) * | 2012-02-17 | 2013-08-19 | Binartech Spólka Jawna Aksamit | Sposób wykrywania kontekstu urzadzenia przenosnego i urzadzenie przenosne z modulem wykrywania kontekstu |
WO2014020604A1 (en) * | 2012-07-31 | 2014-02-06 | Inuitive Ltd. | Multiple sensors processing system for natural user interface applications |
CN104268547A (zh) * | 2014-08-28 | 2015-01-07 | 小米科技有限责任公司 | 一种基于图片内容播放音乐的方法及装置 |
CN115690558A (zh) * | 2014-09-16 | 2023-02-03 | 华为技术有限公司 | 数据处理的方法和设备 |
US9633019B2 (en) * | 2015-01-05 | 2017-04-25 | International Business Machines Corporation | Augmenting an information request |
CN105138963A (zh) * | 2015-07-31 | 2015-12-09 | 小米科技有限责任公司 | 图片场景判定方法、装置以及服务器 |
JP6339542B2 (ja) * | 2015-09-16 | 2018-06-06 | 東芝テック株式会社 | 情報処理装置及びプログラム |
JP6274264B2 (ja) * | 2016-06-29 | 2018-02-07 | カシオ計算機株式会社 | 携帯端末装置及びプログラム |
WO2018084577A1 (en) * | 2016-11-03 | 2018-05-11 | Samsung Electronics Co., Ltd. | Data recognition model construction apparatus and method for constructing data recognition model thereof, and data recognition apparatus and method for recognizing data thereof |
US10592199B2 (en) * | 2017-01-24 | 2020-03-17 | International Business Machines Corporation | Perspective-based dynamic audio volume adjustment |
-
2018
- 2018-11-21 CN CN201811392818.7A patent/CN111209904A/zh active Pending
-
2019
- 2019-05-09 EP EP19874765.1A patent/EP3690678A4/en active Pending
- 2019-05-09 JP JP2021506473A patent/JP7186857B2/ja active Active
- 2019-05-09 WO PCT/CN2019/086127 patent/WO2020103404A1/zh unknown
- 2019-05-09 CA CA3105663A patent/CA3105663C/en active Active
- 2019-05-09 KR KR1020217002422A patent/KR20210022740A/ko not_active Application Discontinuation
- 2019-05-09 AU AU2019385776A patent/AU2019385776B2/en active Active
-
2020
- 2020-08-13 US US16/992,427 patent/US20200372250A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110153617A1 (en) * | 2009-12-18 | 2011-06-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for describing and organizing image data |
CN107402964A (zh) * | 2017-06-22 | 2017-11-28 | 深圳市金立通信设备有限公司 | 一种信息推荐方法、服务器及终端 |
CN107786732A (zh) * | 2017-09-28 | 2018-03-09 | 努比亚技术有限公司 | 终端应用推送方法、移动终端及计算机可读存储介质 |
CN108322609A (zh) * | 2018-01-31 | 2018-07-24 | 努比亚技术有限公司 | 一种通知信息调控方法、设备及计算机可读存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3690678A4 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210056220A1 (en) * | 2019-08-22 | 2021-02-25 | Mediatek Inc. | Method for improving confidentiality protection of neural network model |
Also Published As
Publication number | Publication date |
---|---|
CA3105663C (en) | 2023-12-12 |
AU2019385776A1 (en) | 2021-01-28 |
AU2019385776B2 (en) | 2023-07-06 |
JP2021535644A (ja) | 2021-12-16 |
EP3690678A1 (en) | 2020-08-05 |
KR20210022740A (ko) | 2021-03-03 |
CA3105663A1 (en) | 2020-05-28 |
CN111209904A (zh) | 2020-05-29 |
JP7186857B2 (ja) | 2022-12-09 |
EP3690678A4 (en) | 2021-03-10 |
US20200372250A1 (en) | 2020-11-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020103404A1 (zh) | 一种业务处理的方法以及相关装置 | |
CN110045908B (zh) | 一种控制方法和电子设备 | |
CN109409161B (zh) | 图形码识别方法、装置、终端及存储介质 | |
CN115473957B (zh) | 一种图像处理方法和电子设备 | |
CN108399349B (zh) | 图像识别方法及装置 | |
CN111738122B (zh) | 图像处理的方法及相关装置 | |
US20230245398A1 (en) | Image effect implementing method and apparatus, electronic device and storage medium | |
CN110059686B (zh) | 字符识别方法、装置、设备及可读存储介质 | |
US20220262035A1 (en) | Method, apparatus, and system for determining pose | |
CN115079886B (zh) | 二维码识别方法、电子设备以及存储介质 | |
WO2022073417A1 (zh) | 融合场景感知机器翻译方法、存储介质及电子设备 | |
WO2022179604A1 (zh) | 一种分割图置信度确定方法及装置 | |
EP4175285A1 (en) | Method for determining recommended scene, and electronic device | |
WO2022156473A1 (zh) | 一种播放视频的方法及电子设备 | |
CN110045958B (zh) | 纹理数据生成方法、装置、存储介质及设备 | |
CN113220176A (zh) | 基于微件的显示方法、装置、电子设备及可读存储介质 | |
WO2022143314A1 (zh) | 一种对象注册方法及装置 | |
WO2022161011A1 (zh) | 生成图像的方法和电子设备 | |
US9525825B1 (en) | Delayed image data processing | |
CN115150542B (zh) | 一种视频防抖方法及相关设备 | |
WO2022089216A1 (zh) | 一种界面显示的方法和电子设备 | |
CN114071024A (zh) | 图像拍摄方法、神经网络训练方法、装置、设备和介质 | |
WO2023216957A1 (zh) | 一种目标定位方法、系统和电子设备 | |
CN116761082B (zh) | 图像处理方法及装置 | |
WO2024088130A1 (zh) | 显示方法和电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2019874765 Country of ref document: EP Effective date: 20200429 |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19874765 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3105663 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 20217002422 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019385776 Country of ref document: AU Date of ref document: 20190509 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021506473 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |