CN110336892B - Multi-device cooperation method and device - Google Patents

Multi-device cooperation method and device

Info

Publication number
CN110336892B
Authority
CN
China
Prior art keywords
module
cooperation
information
parameter
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910676002.5A
Other languages
Chinese (zh)
Other versions
CN110336892A (en)
Inventor
张伟萌
戴帅湘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Weixin Technology Co.,Ltd.
Original Assignee
Beijing Moran Cognitive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Moran Cognitive Technology Co Ltd filed Critical Beijing Moran Cognitive Technology Co Ltd
Priority to CN201910676002.5A priority Critical patent/CN110336892B/en
Publication of CN110336892A publication Critical patent/CN110336892A/en
Application granted granted Critical
Publication of CN110336892B publication Critical patent/CN110336892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/52 Network services specially adapted for the location of the user terminal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/025 Services making use of location information using location based information parameters
    • H04W4/027 Services making use of location information using location based information parameters using movement velocity, acceleration information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/48 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a multi-device cooperation method comprising the following steps: S101, receiving a call request; S102, acquiring position information of at least one second device capable of participating in cooperation; S103, determining whether the at least one second device falls within a first predetermined area range, and if so, executing step S104; S104, acquiring scene information and a usage state of the at least one second device; S105, determining a first cooperation parameter for a camera module, a microphone module and/or a display module of the at least one second device based on the scene information and the usage state; and S106, generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module and/or display module of the at least one second device. By this method, cooperation among devices can be optimized, device display efficiency in different scenes is improved, and user experience is enhanced.

Description

Multi-device cooperation method and device
Technical Field
Embodiments of the invention relate to the technical field of information processing, and in particular to a multi-device cooperation method and apparatus, a terminal device, a cloud server, and a computer-readable storage medium.
Background
With the development of the Internet of Things, interconnection technology between devices has gradually entered many fields, such as industry, medical care, communications, automobiles, and smart home services. Although multi-device linkage is already applied at some scale in daily life, the related technology remains immature; in smart home and smart vehicle environments in particular, where devices are numerous, there is still enormous room to develop interactive cooperation among devices.
How to invoke different device displays in a specific scene, how to make the most appropriate device selection or device cooperation, and how to provide a better user experience have become urgent problems to be solved.
Disclosure of Invention
The invention provides a multi-device cooperation method and apparatus, a terminal device, a cloud server, and a computer-readable storage medium.
The multi-device cooperation method comprises the following steps:
S101, receiving a call request;
S102, acquiring position information of at least one second device capable of participating in cooperation;
S103, determining whether the at least one second device falls within a first predetermined area range, and if so, executing step S104;
S104, acquiring scene information of the at least one second device and a usage state of the second device;
S105, determining a first cooperation parameter for a camera module, a microphone module, and/or a display module of the at least one second device based on the scene information and the usage state;
and S106, generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module, and/or display module of the at least one second device.
The invention also provides a multi-device cooperation apparatus, the apparatus comprising:
an input module for receiving a call request;
an acquisition module for acquiring position information of at least one second device capable of participating in cooperation;
a position judgment module for determining whether the at least one second device falls within a first predetermined area range and, if so, feeding back a judgment success message;
the acquisition module being further configured to acquire scene information of the at least one second device and a usage state of the second device;
a parameter determination module configured to determine a first cooperation parameter for a camera module, a microphone module, and/or a display module of the at least one second device based on the scene information and the usage state;
and a control module for generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module, and/or display module of the at least one second device.
The invention also provides a terminal device comprising a processor and a memory, wherein the memory stores a computer program executable on the processor, and the computer program, when executed by the processor, implements the method described above.
The invention also provides a cloud server comprising a processor and a memory, wherein the memory stores a computer program executable on the processor, and the computer program, when executed by the processor, implements the method described above.
The invention also provides a computer-readable storage medium storing a computer program executable on a processor, the computer program, when executed, implementing the method described above.
By this method, cooperation among devices can be optimized, device display efficiency in different scenes is improved, and user experience is enhanced.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 illustrates a multi-device cooperation method in an embodiment of the present invention.
Fig. 2 illustrates a multi-device cooperation apparatus in an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments and their specific features are detailed illustrations of the technical solutions of the invention, not limitations of them, and the technical features of the embodiments may be combined with each other where no conflict arises.
The method can be applied to any device with interaction capability, such as a computer, mobile phone, tablet computer, car machine, vehicle-mounted terminal, or other smart home equipment such as set-top boxes and smart household appliances.
Example one
Referring to Fig. 1, an embodiment of the present invention provides a multi-device cooperation method, the method comprising:
S101, receiving a call request;
S102, acquiring position information of at least one second device capable of participating in cooperation;
S103, determining whether the at least one second device falls within a first predetermined area range, and if so, executing step S104;
S104, acquiring scene information of the at least one second device and a usage state of the second device;
S105, determining a first cooperation parameter for a camera module, a microphone module, and/or a display module of the at least one second device based on the scene information and the usage state;
and S106, generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module, and/or display module of the at least one second device.
Specifically, step S101 includes:
receiving a call request;
determining whether the device cooperation mode is enabled;
if so, executing step S102.
If not, a reminder may be sent to the user as a voice message or a text message. For example, "Enable device cooperation?" may be displayed at the called user's terminal, the user selects "OK" or "Cancel", and the device cooperation mode is turned on or off according to the user's selection. Voice may also be combined: the user is asked by voice whether device cooperation should be enabled, the user's voice feedback is monitored, and the result is extracted and recognized; if the user answers "yes", "enable", or another affirmative phrase, it is determined that the user wants the device cooperation mode enabled, and device cooperation is enabled in the background.
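A minimal sketch of this S101 flow is given below, assuming a text prompt; the helper prompt_user and the set of affirmative phrases are illustrative, not part of the patent.

    AFFIRMATIVE = {"yes", "enable", "ok", "turn on"}

    def prompt_user(message: str) -> str:
        # Stand-in for the text/voice reminder; a real system would display
        # the message or synthesize speech and capture the user's reply.
        return input(f"{message} ")

    def handle_call_request(cooperation_mode_on: bool) -> bool:
        """Step S101: return True when step S102 should be executed."""
        if cooperation_mode_on:
            return True
        reply = prompt_user("Enable device cooperation? (yes/no)")
        return reply.strip().lower() in AFFIRMATIVE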
Specifically, for devices participating in cooperation, a device cooperation list may be stored locally or in the cloud and may be dynamically updated. The device cooperation list includes: device ID, associated account, associated network, interactive device ID, open permission category, and the like.
The device ID may be the machine code, MAC address, or device name of the device; the associated account may be an account name on the application platform to which the device belongs, for example a registered mobile phone number or registered mailbox; the associated network may be the network ID the device accesses, which may be updated in real time based on networking status; the interactive device ID may be the machine code, MAC address, or device name of the interactive device; the open permission category may be fully open or partially open, covering, for example, the display module, voice module, camera module, and storage module. The device and its interactive devices may belong to the same application platform or to different application platforms.
For example, devices A, B, C, and D may open interaction permissions so that all or some of the devices associated with the same network or the same account may invoke all or some of their functions. As shown in the table below, the interactive devices of device A include devices B and C, both of which open all permissions to device A; the device A side or the cloud may select some or all modules of device B and/or device C as circumstances require to implement cooperation.
Table 1: Device cooperation list (provided as an image in the original publication)
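One way to represent such a list entry is sketched below; the field types and example values are assumptions for illustration and follow the description of device A above.

    # A sketch of one device-cooperation-list entry; field types and example
    # values are illustrative assumptions based on the description above.
    from dataclasses import dataclass, field

    @dataclass
    class CooperationEntry:
        device_id: str            # machine code, MAC address, or device name
        associated_account: str   # e.g. registered phone number or mailbox
        associated_network: str   # network ID the device currently accesses
        # Maps an interactive device ID to its open permission category,
        # e.g. "all" or a subset of module names.
        interactive_devices: dict = field(default_factory=dict)

    # Device A from the description: B and C open all permissions to it.
    entry_a = CooperationEntry(
        device_id="A",
        associated_account="138-0000-0000",  # illustrative account
        associated_network="home-wifi-01",   # illustrative network ID
        interactive_devices={"B": "all", "C": "all"},
    )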
In addition, a 3-bit identification field may be used to indicate which of a second device's camera, voice, and display module permissions are open. For example, for device B in the table above, the interactive devices of device B include devices A, C, and D: device A opens all permissions; device C opens the camera module and voice module; and device D opens the display module and camera module. The 3-bit identification field then indicates: device A (111); device C (110); device D (101).
Further, where the second device has multiple camera modules, voice modules, and display modules, the identification field may be extended accordingly. For example, if a vehicle has 4 camera modules, 4 voice modules, and 4 display modules, the identification field may be set to "xxxx,xxxx,xxxx". "111101011100" then indicates that all four camera modules may participate in device cooperation; that the 1st and 3rd voice modules do not participate while the 2nd and 4th do; and that the 1st and 2nd display modules participate while the 3rd and 4th do not. Other numbers of modules may be indicated analogously.
If the user does not select an open permission category when initially setting open permissions, all permissions are open by default, i.e. the default identification field is "111".
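Encoding and decoding this identification field might look as follows; a minimal sketch assuming the (camera, voice, display) bit order used in the examples above.

    # A sketch of encoding/decoding the permission identification field,
    # assuming the (camera, voice, display) bit order used above.
    MODULES = ("camera", "voice", "display")

    def encode_field(open_modules: set) -> str:
        """E.g. {'camera', 'voice'} -> '110' (device C in the example)."""
        return "".join("1" if m in open_modules else "0" for m in MODULES)

    def decode_field(bits: str) -> set:
        """E.g. '101' -> {'camera', 'display'} (device D in the example)."""
        return {m for m, bit in zip(MODULES, bits) if bit == "1"}

    assert encode_field({"camera", "voice"}) == "110"
    assert decode_field("101") == {"camera", "display"}
    assert encode_field(set(MODULES)) == "111"  # default: all permissions open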
Specifically, step S102 includes:
acquiring the position information of the at least one second device, the position information comprising geographical position information and acceleration information.
In this embodiment, it is assumed that the second device is a vehicle including a car machine or other vehicle-mounted device.
The vehicle-mounted GPS can acquire the geographical position of the vehicle in real time, and an acceleration sensor measures the vehicle's acceleration, from which the movement speed, movement direction, and the like of the vehicle can be obtained.
Specifically, step S103 includes:
determining whether the relative distance between the geographical position information and the user's current geographical position is less than or equal to a first threshold;
determining whether the acceleration information is consistent with the user's current acceleration information;
and if the relative distance is less than or equal to the first threshold and the acceleration information is consistent, determining that the first predetermined area range is met.
Specifically, the geographical position information and acceleration information of the called user's device are acquired and compared with those of the vehicle. The straight-line distance between the user device and the vehicle is calculated; if it is less than or equal to the first threshold, the user device is considered close to the vehicle. It is then determined whether the acceleration information of the user device and the vehicle is consistent; if so, the user is determined to be inside the vehicle, i.e. the second device (the vehicle) falls within the first predetermined area range.
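A sketch of this check is given below, approximating the straight-line distance with the haversine formula and treating "consistent" acceleration as two acceleration vectors differing within a small tolerance; the threshold values are illustrative, not taken from the patent.

    # A sketch of the S103 check; the 5 m threshold and 0.3 m/s^2 tolerance
    # are illustrative assumptions.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in meters."""
        r = 6371000.0  # mean Earth radius
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def in_first_area(user_pos, vehicle_pos, user_acc, vehicle_acc,
                      dist_threshold_m=5.0, acc_tolerance=0.3):
        """True if the user is near the vehicle and both accelerate alike."""
        close = haversine_m(*user_pos, *vehicle_pos) <= dist_threshold_m
        consistent = all(abs(u - v) <= acc_tolerance
                         for u, v in zip(user_acc, vehicle_acc))
        return close and consistent

    # Phone and vehicle about two meters apart, braking together -> True.
    print(in_first_area((39.98410, 116.30750), (39.98411, 116.30752),
                        (0.1, -2.0, 0.0), (0.12, -1.9, 0.0)))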
Specifically, in step S104:
the current scene of the second device is determined from current environment information and user state information;
and the usage state of the second device is determined from the current usage of the camera module, microphone module, and/or display module of the at least one second device.
For example, the vehicle's current scene is determined by monitoring the driving environment, the driving state of the vehicle, the distribution of passengers in the vehicle, and passenger emotions; the vehicle's usage state information is determined from the applications currently running on the car machine or vehicle-mounted equipment. For example, the currently playing broadcast occupies the voice module while the display module and camera module are idle; the 3-bit identification field described above can likewise indicate the modules' usage state, and for this example the usage identification information is set to 101.
Multiple scenario modes may be created in advance, for example: commuting, outing with friends, outing with a partner, family outing, quiet, boring, business, entertainment, and so on. This scene division is only an illustrative list; the present invention includes, but is not limited to, this division.
For example, when the current driving environment is nighttime, the vehicle is stuck in traffic, the only occupant is the driver, and analysis of the driver's expression and body language indicates a bad mood, the scene is judged to be boring.
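Scene determination of this kind could be sketched as simple rules over the monitored signals; the signal names, the extra rule, and the default scene below are illustrative assumptions.

    # A sketch of rule-based scene determination matching the example above.
    def determine_scene(environment, driving_state, occupants, driver_mood):
        if (environment == "night" and driving_state == "traffic_jam"
                and occupants == ["driver"] and driver_mood == "bad"):
            return "boring"
        if len(occupants) > 2 and "child" in occupants:
            return "family_outing"  # illustrative additional rule
        return "commute"            # illustrative default scene

    print(determine_scene("night", "traffic_jam", ["driver"], "bad"))  # boring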
Specifically, in step S105, the first cooperation parameter for the camera module, microphone module, and/or display module of the at least one second device is determined based on the scene information and the usage state; the first cooperation parameter may be obtained by querying a preset recommended cooperation mode corresponding to the scene.
For example, a recommended cooperation mode list is created that stores the cooperation parameter recommended for each scene; the cooperation parameter may recommend a cooperation mode for the camera module, microphone module, and/or display module of the second device. As shown in the table below, in a boring scene the recommended cooperation mode allows all three modules of the second device to participate in the call, so the call is carried out using, for example, the camera module, microphone module, and/or display module in the vehicle.
Table 2: Recommended cooperation mode list

Scene           Cooperation parameter
Boring          111
Entertainment   100
Family outing   111
Continuing the previous example: combining the boring scene with the usage state in which the currently playing broadcast occupies the voice module while the display module and camera module are idle, the first cooperation parameter is determined as: the display module and camera module may participate in cooperation, i.e. specifically the field "101".
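Combining the scene's recommended mode with the idle/occupied usage field reproduces this result; treating the combination as a bitwise AND is an assumption consistent with the example.

    # A sketch: recommended mode AND usage field (1 = idle) -> first parameter.
    RECOMMENDED = {"boring": "111", "entertainment": "100", "family_outing": "111"}

    def and_fields(a: str, b: str) -> str:
        """Bitwise AND of two identification fields of equal length."""
        return "".join("1" if x == y == "1" else "0" for x, y in zip(a, b))

    usage = "101"  # the playing broadcast occupies the voice module
    print(and_fields(RECOMMENDED["boring"], usage))  # '101' -> camera + display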
Specifically, step S106 further includes:
acquiring the type of the call request;
determining a second cooperation parameter based on the type of the call request;
and adjusting the first cooperation parameter according to the second cooperation parameter, and generating a control instruction based on the adjusted cooperation parameter.
For example, it is determined whether the type of the call request is driver called, front passenger called, or rear-seat passenger called.
Specifically, the called party may be determined using in-vehicle sensing devices, for example by detecting the sound source and vibration source and capturing the movements of persons in the vehicle, or based on a preset user account list in which the distribution of persons in the vehicle is recorded as users board. For example, when a user boards, the user is identified by biometric features, the corresponding called account is matched from the user account list, and a temporary correspondence between the user's called account and in-vehicle identity is created. Suppose user A is identified as sitting in the front passenger seat with terminal called account 123456789; the temporary correspondence (front passenger seat, 123456789) is created. After a call request for user A is received, the called account is extracted and matched against the temporary correspondence to determine that the current called party is the front passenger, i.e. the type of the call request is front-passenger called.
The usage permissions of the in-vehicle modules for the driver, the front passenger, and the rear-seat passengers, i.e. the second cooperation parameters, are preset.
For example, different usage permissions can be set for different in-vehicle member identities with respect to the camera, voice, and display modules in the vehicle: the driver may use the camera, voice, and display modules; the front passenger may use the camera and voice modules; and a passenger may use the camera module. The identification field may again store the permission information: driver, 111; front passenger, 110; passenger, 100.
For the multi-module case, for example a car with 4 camera modules, 3 voice modules, and 1 display module, the usage permission may be set to "1100,111,1" for the driver, "1100,010,0" for the front passenger, and "0011,000,0" for a passenger.
The first cooperation parameter may be adjusted using the second cooperation parameter, for example by updating the first cooperation parameter to the intersection of the first and second cooperation parameters.
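A sketch of this adjustment follows, reusing the temporary correspondence and permission table from the example; the lookup structures and helper names are illustrative.

    # Second cooperation parameter from the call type, then intersection.
    SEAT_PERMISSIONS = {"driver": "111", "front_passenger": "110", "passenger": "100"}
    TEMP_CORRESPONDENCE = {"123456789": "front_passenger"}  # called account -> seat

    def second_parameter(called_account: str) -> str:
        seat = TEMP_CORRESPONDENCE.get(called_account, "passenger")
        return SEAT_PERMISSIONS[seat]

    def and_fields(a: str, b: str) -> str:  # as in the earlier sketch
        return "".join("1" if x == y == "1" else "0" for x, y in zip(a, b))

    second = second_parameter("123456789")  # '110' for the front passenger
    print(and_fields("101", second))        # '100' -> only the camera module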
Specifically, step S106 includes:
acquiring user cooperation preference information;
and adjusting the first cooperation parameter in combination with the cooperation preference information, and generating a control instruction based on the adjusted cooperation parameter.
For example, a user's cooperation preference is derived from statistics over their call history. Continuing the previous example, the historical call cooperation records of the driver's terminal are obtained; the records within a certain period may be counted to derive the user's cooperation preference. For example, user A usually wears a Bluetooth headset, and in the statistical cooperation history the probability of invoking only the in-vehicle camera is high; user B typically likes speaker mode and, according to their historical statistics, prefers to use the car machine's display module for video interaction. Cooperation preference information may be recorded as before, e.g. (user: XX, cooperation preference: 100).
As in the previous example, in the boring scene with the currently playing broadcast occupying the voice module and the display module and camera module idle, the first cooperation parameter is determined as "101"; combined with the user cooperation preference "100", the updated first cooperation parameter is determined as "100". Control information is then generated based on the updated first cooperation parameter "100" to invoke only the camera module of the second device.
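Deriving the preference field from history could be sketched as a per-module majority vote, as below; the sample records and the voting rule are illustrative assumptions.

    # Preference field by per-module majority vote over historical records.
    def preference_from_history(records):
        """Keep a 1 in each position only if most records have it set."""
        n = len(records)
        return "".join("1" if sum(r[i] == "1" for r in records) * 2 > n else "0"
                       for i in range(len(records[0])))

    def and_fields(a: str, b: str) -> str:  # as in the earlier sketches
        return "".join("1" if x == y == "1" else "0" for x, y in zip(a, b))

    history = ["100", "100", "101", "100"]   # mostly camera-only cooperation
    pref = preference_from_history(history)  # '100'
    print(and_fields("101", pref))           # '100' -> invoke only the camera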
In addition, for full transfer or partial cooperation, a switching option can be provided on the user terminal side or the second device side, for example a switching button displayed on screen that receives the user's tap, or voice control from the user. The cooperation mode is then adjusted according to the switch chosen by the user: for example, if during the call the user wants the voice played through the car machine, or the picture displayed by the car machine, the voice or display can be transferred to the car machine via the switching button. If the voice module or display module of the in-vehicle device is occupied at that moment, for example the playing broadcast occupies the voice module or the displayed navigation occupies the display module, the current operation can be paused based on the switching instruction (e.g. pausing the broadcast or the navigation display), after which the video call's voice is played, or its video picture displayed, on the in-vehicle device.
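The pause-then-transfer behaviour might be sketched as follows; the task registry and module names are illustrative assumptions.

    # Pause the current occupant of a module, then route the call to it.
    current_tasks = {"voice": "broadcast", "display": "navigation"}
    paused = []

    def switch_to_vehicle(module: str) -> str:
        occupant = current_tasks.get(module)
        if occupant is not None and occupant != "call":
            paused.append(occupant)      # e.g. pause the broadcast
        current_tasks[module] = "call"   # transfer the call stream
        return f"{module} now carries the call (paused: {paused})"

    print(switch_to_vehicle("voice"))  # pauses the broadcast, routes the call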
By this method, cooperation among devices can be optimized, device display efficiency in different scenes is improved, and user experience is enhanced.
Example two
Referring to Fig. 2, a second embodiment of the present invention provides a multi-device cooperation apparatus, the apparatus comprising:
an input module for receiving a call request;
an acquisition module for acquiring position information of at least one second device capable of participating in cooperation;
a position judgment module for determining whether the at least one second device falls within a first predetermined area range and, if so, feeding back a judgment success message;
the acquisition module being further configured to acquire scene information of the at least one second device and a usage state of the second device;
a parameter determination module configured to determine a first cooperation parameter for a camera module, a microphone module, and/or a display module of the at least one second device based on the scene information and the usage state;
and a control module for generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module, and/or display module of the at least one second device.
Specifically, the input module is further configured to receive a call request and determine whether the device cooperation mode is enabled;
if it is enabled, a notification message is sent to the position judgment module.
Specifically, the acquisition module is configured to
acquire the position information of the at least one second device, the position information comprising geographical position information and acceleration information.
Specifically, the position judgment module is configured to
determine whether the relative distance between the geographical position information and the user's current geographical position is less than or equal to a first threshold;
determine whether the acceleration information is consistent with the user's current acceleration information;
and if the relative distance is less than or equal to the first threshold and the acceleration information is consistent, determine that the first predetermined area range is met.
Specifically, the acquisition module is configured to
determine the current scene of the second device from current environment information and user state information;
and determine the usage state of the second device from the current usage of the camera module, microphone module, and/or display module of the at least one second device.
Specifically, the acquisition module is further configured to acquire the type of the call request;
the parameter determination module is further configured to determine a second cooperation parameter based on the type of the call request;
and the control module is further configured to adjust the first cooperation parameter according to the second cooperation parameter and to generate a control instruction based on the adjusted cooperation parameter.
Specifically, the acquisition module is further configured to acquire user cooperation preference information;
and the control module is further configured to adjust the first cooperation parameter in combination with the cooperation preference information and to generate a control instruction based on the adjusted cooperation parameter.
The invention also provides a terminal device comprising a processor and a memory, wherein the memory stores a computer program executable on the processor, and the computer program, when executed by the processor, implements the method described above.
The terminal device includes, but is not limited to, a computer, mobile phone, tablet computer, car machine, vehicle-mounted terminal, set-top box, or smart household appliance.
The invention also provides a cloud server comprising a processor and a memory, wherein the memory stores a computer program executable on the processor, and the computer program, when executed by the processor, implements the method described above.
The invention also provides a computer-readable storage medium storing a computer program executable on a processor, the computer program, when executed, implementing the method described above.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. The computer-readable storage medium may include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), a flash memory, an erasable programmable read-only memory (EPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, or a combination thereof.
The above description is only an example to aid understanding of the present invention and is not intended to limit its scope. In specific implementations, those skilled in the art may change, add, or remove components of the apparatus, and may change, add, remove, or reorder steps of the method, according to the actual situation, without affecting the functions implemented by the method.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents, and all changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (17)

1. A method for multi-device cooperation, the method comprising:
S101, receiving a call request;
S102, acquiring position information of at least one second device capable of participating in cooperation;
S103, determining whether the at least one second device falls within a first predetermined area range, and if so, executing step S104;
S104, acquiring scene information of the at least one second device and a usage state of the second device;
S105, determining a first cooperation parameter for a camera module, a microphone module, and/or a display module of the at least one second device based on the scene information and the usage state;
and S106, generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module, and/or display module of the at least one second device.
2. The method according to claim 1, wherein step S101 comprises:
receiving a call request;
determining whether a device cooperation mode is enabled;
and if so, executing step S102.
3. The method according to claim 1, wherein step S102 comprises:
acquiring the position information of the at least one second device, the position information comprising geographical position information and acceleration information.
4. The method according to claim 3, wherein step S103 comprises:
determining whether the relative distance between the geographical position information and the user's current geographical position is less than or equal to a first threshold;
determining whether the acceleration information is consistent with the user's current acceleration information;
and if the relative distance is less than or equal to the first threshold and the acceleration information is consistent, determining that the first predetermined area range is met.
5. The method according to claim 4, wherein in step S104:
the current scene of the second device is determined from current environment information and user state information;
and the usage state of the second device is determined from the current usage of the camera module, microphone module, and/or display module of the at least one second device.
6. The method according to claim 1, wherein step S106 further comprises:
acquiring the type of the call request;
determining a second cooperation parameter based on the type of the call request;
and adjusting the first cooperation parameter according to the second cooperation parameter, and generating a control instruction based on the adjusted cooperation parameter.
7. The method according to claim 1, wherein step S106 comprises:
acquiring user cooperation preference information;
and adjusting the first cooperation parameter in combination with the cooperation preference information, and generating a control instruction based on the adjusted cooperation parameter.
8. An apparatus for multi-device cooperation, the apparatus comprising:
an input module for receiving a call request;
an acquisition module for acquiring position information of at least one second device capable of participating in cooperation;
a position judgment module for determining whether the at least one second device falls within a first predetermined area range and, if so, feeding back a judgment success message;
the acquisition module being further configured to acquire scene information of the at least one second device and a usage state of the second device;
a parameter determination module configured to determine a first cooperation parameter for a camera module, a microphone module, and/or a display module of the at least one second device based on the scene information and the usage state;
and a control module for generating a control instruction based on the first cooperation parameter, the control instruction being used to invoke the camera module, microphone module, and/or display module of the at least one second device.
9. The apparatus of claim 8, wherein
the input module is further configured to receive a call request and determine whether a device cooperation mode is enabled;
and if it is enabled, to send a notification message to the position judgment module.
10. The apparatus of claim 8, wherein the acquisition module is configured to
acquire the position information of the at least one second device, the position information comprising geographical position information and acceleration information.
11. The apparatus of claim 10, wherein the position judgment module is configured to
determine whether the relative distance between the geographical position information and the user's current geographical position is less than or equal to a first threshold;
determine whether the acceleration information is consistent with the user's current acceleration information;
and if the relative distance is less than or equal to the first threshold and the acceleration information is consistent, determine that the first predetermined area range is met.
12. The apparatus of claim 11, wherein the acquisition module is configured to
determine the current scene of the second device from current environment information and user state information;
and determine the usage state of the second device from the current usage of the camera module, microphone module, and/or display module of the at least one second device.
13. The apparatus of claim 8, wherein
the acquisition module is further configured to acquire the type of the call request;
the parameter determination module is further configured to determine a second cooperation parameter based on the type of the call request;
and the control module is further configured to adjust the first cooperation parameter according to the second cooperation parameter and to generate a control instruction based on the adjusted cooperation parameter.
14. The apparatus of claim 8, wherein
the acquisition module is further configured to acquire user cooperation preference information;
and the control module is further configured to adjust the first cooperation parameter in combination with the cooperation preference information and to generate a control instruction based on the adjusted cooperation parameter.
15. A terminal device, comprising a processor and a memory, wherein the memory stores a computer program executable on the processor, and the computer program, when executed by the processor, implements the method according to any one of claims 1 to 7.
16. A cloud server, comprising a processor and a memory, wherein the memory stores a computer program executable on the processor, and the computer program, when executed by the processor, implements the method according to any one of claims 1 to 7.
17. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program executable on a processor, and the computer program, when executed, implements the method according to any one of claims 1 to 7.
CN201910676002.5A 2019-07-25 2019-07-25 Multi-device cooperation method and device Active CN110336892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910676002.5A CN110336892B (en) 2019-07-25 2019-07-25 Multi-device cooperation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910676002.5A CN110336892B (en) 2019-07-25 2019-07-25 Multi-device cooperation method and device

Publications (2)

Publication Number Publication Date
CN110336892A CN110336892A (en) 2019-10-15
CN110336892B (en) 2020-10-02

Family

ID=68147347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910676002.5A Active CN110336892B (en) 2019-07-25 2019-07-25 Multi-device cooperation method and device

Country Status (1)

Country Link
CN (1) CN110336892B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099170B (en) * 2020-01-09 2023-05-12 博泰车联网科技(上海)股份有限公司 Method, apparatus and computer storage medium for information processing
JP7419973B2 (en) * 2020-06-01 2024-01-23 トヨタ自動車株式会社 Information processing device, information processing method, program, and mobile device
CN114222020B (en) * 2020-09-03 2022-11-25 华为技术有限公司 Position relation identification method and device and readable storage medium
CN114898751B (en) * 2022-06-15 2024-04-23 中国电信股份有限公司 Automatic configuration method and system, storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016208837A1 (en) * 2016-05-23 2017-11-23 Bayerische Motoren Werke Aktiengesellschaft Personalized comfort function for an occupant of a motor vehicle
CN207039580U (en) * 2017-03-20 2018-02-23 上海贝壳供应链管理有限公司 A kind of vehicle-mounted wisdom box
CN105882598B (en) * 2015-10-19 2018-06-22 睿驰智能汽车(广州)有限公司 Vehicle operating control method, device and system
CN207543139U (en) * 2017-11-10 2018-06-26 江苏大学 A kind of credible onboard system of the multi-internet integration based on TPM
CN109347709A (en) * 2018-10-26 2019-02-15 北京蓦然认知科技有限公司 A kind of smart machine control method, apparatus and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102739478A (en) * 2011-12-29 2012-10-17 上海理滋芯片设计有限公司 Intelligent home system with intelligent home health care function, terminal and method
CN103580968A (en) * 2013-11-12 2014-02-12 中国联合网络通信有限公司物联网研究院 Smart home system based on internet of things cloud computing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105882598B (en) * 2015-10-19 2018-06-22 睿驰智能汽车(广州)有限公司 Vehicle operating control method, device and system
DE102016208837A1 (en) * 2016-05-23 2017-11-23 Bayerische Motoren Werke Aktiengesellschaft Personalized comfort function for an occupant of a motor vehicle
CN207039580U (en) * 2017-03-20 2018-02-23 上海贝壳供应链管理有限公司 A kind of vehicle-mounted wisdom box
CN207543139U (en) * 2017-11-10 2018-06-26 江苏大学 A kind of credible onboard system of the multi-internet integration based on TPM
CN109347709A (en) * 2018-10-26 2019-02-15 北京蓦然认知科技有限公司 A kind of smart machine control method, apparatus and system

Also Published As

Publication number Publication date
CN110336892A (en) 2019-10-15

Similar Documents

Publication Publication Date Title
CN110336892B (en) Multi-device cooperation method and device
AU2014201252B2 (en) Method and apparatus for providing state information
US9602459B2 (en) Selectable mode based social networking interaction systems and methods
CN108769431A (en) Audio play control method, device, storage medium and mobile terminal
AU2017201663A1 (en) Implicit association and polymorphism driven human machine interaction
KR20190032628A (en) Conditional disclosure of personal-controlled content in a group context
CN105554027A (en) Resource sharing method and device
US9412394B1 (en) Interactive audio communication system
KR102061815B1 (en) Method for assisting a user of a motor vehicle, multimedia system, and motor vehicle
US20100082515A1 (en) Environmental factor based virtual communication systems and methods
CN105451202A (en) Message processing method and device
CN113271380A (en) Audio processing method and device
CA2742417A1 (en) Service center support
CN102932516A (en) Apparatus for communication between a vehicle based computing system and a remote application
CN111319566A (en) Voice recognition function link control system and method for vehicle
US9183563B2 (en) Electronic questionnaire
CN105677023A (en) Information presenting method and device
CN111294606A (en) Live broadcast processing method and device, live broadcast client and medium
US11665244B2 (en) Selecting user profiles on platforms based on optimal persona of a user in a given context
US11729123B2 (en) Systems and methods for sending content
CN110913276A (en) Data processing method, device, server, terminal and storage medium
CN110601966A (en) Method, electronic device and computer readable medium for playing messages
US11050499B1 (en) Audience response collection and analysis
CN106533910B (en) Note display methods and device
CN106713613B (en) Note display methods and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231023

Address after: 307-1, No. 18 Zhigu Street, Tangjiawan Town, High tech Zone, Zhuhai City, Guangdong Province, 519000

Patentee after: Guangdong Weixin Technology Co.,Ltd.

Address before: Room 401, gate 2, block a, Zhongguancun 768 Creative Industry Park, 5 Xueyuan Road, Haidian District, Beijing 100083

Patentee before: BEIJING MORAN COGNITIVE TECHNOLOGY Co.,Ltd.
