CN115118536A - Sharing method, control device and computer-readable storage medium - Google Patents

Sharing method, control device and computer-readable storage medium

Info

Publication number
CN115118536A
CN115118536A (application CN202110290205.8A)
Authority
CN
China
Prior art keywords
target object
sharing
instruction
information
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110290205.8A
Other languages
Chinese (zh)
Inventor
应臻恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pateo Network Technology Service Co Ltd
Original Assignee
Shanghai Pateo Network Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Network Technology Service Co Ltd filed Critical Shanghai Pateo Network Technology Service Co Ltd
Priority to CN202110290205.8A
Publication of CN115118536A
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2805 Home Audio Video Interoperability [HAVI] networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282 Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/029 Location-based management or tracking services

Abstract

The invention discloses a sharing method, a control device, and a computer-readable storage medium. The sharing method comprises: acquiring an instruction and determining a target object according to the instruction; acquiring spatial distribution information of the space where the target object is located, so as to determine the position of the target object in that space; when the instruction is a display instruction, presenting the activity of the target object based on the target object's position; and when the instruction is a sharing instruction, determining, based on the target object's position, a first device for presenting the shared content indicated by the sharing instruction. The sharing method, control device, and computer-readable storage medium provided by the invention can accurately determine the target object and its position, quickly present the target object's activity and/or share content with the target object, improve sharing efficiency and accuracy, and thereby achieve fast, accurate sharing.

Description

Sharing method, control device, and computer-readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a sharing method, a control device, and a computer-readable storage medium.
Background
With the development of science and technology, smart homes are increasingly networked. Compared with an ordinary home, a smart home not only provides traditional living functions but also features network communication, information appliances, and device automation. A smart home is equipped with a smart home system, absent from ordinary homes, that offers all-round information exchange: household devices can keep exchanging information with external devices at any time, which supports people's normal daily life while greatly satisfying their need to share.
However, current sharing methods require the user to select the peer sharing device in advance, and when there are many peer devices the selection process is time-consuming. Moreover, when there are multiple peer devices of the same type, for example several televisions, it is impossible to determine through which device the shared object will actually receive the shared content.
Disclosure of Invention
An object of the present invention is to provide a sharing method, a control device, and a computer-readable storage medium that can accurately determine a target object and its position during sharing, quickly present the target object's activity and/or share content with the target object, improve sharing efficiency and accuracy, and thereby achieve fast, accurate sharing.
Another object of the present invention is to provide a sharing method in which spatial distribution information and the positions of devices in a space are obtained from the devices' signal characteristics and/or collected images, so that the spatial distribution information is obtained accurately and the devices in the space are located precisely.
Another object of the present invention is to provide a sharing method in which the position of the target object is determined from the position, in the space, of the device that acquires the target object information, so that the target object is located accurately.
Another object of the present invention is to provide a sharing method in which the target object's activity can be presented as text and/or images, so that it can be presented flexibly and in varied forms.
Another object of the present invention is to provide a sharing method in which different devices can be selected for display depending on the shared content, so that the sharing device is chosen flexibly.
Another object of the present invention is to provide a sharing method in which the display form of the shared content is determined from the distance between the target object and the sharing device, so that the shared content is displayed flexibly.
To achieve the above and related objects, the present invention provides a sharing method, comprising:
acquiring an instruction so as to determine a target object according to the instruction;
acquiring spatial distribution information of a space where the target object is located so as to determine the position of the target object in the space; and
when the instruction is a display instruction, presenting the activity of the target object based on the position of the target object; and
when the instruction is a sharing instruction, determining, based on the position of the target object, a first device for presenting the shared content indicated by the sharing instruction. Thus, during sharing, the target object and its position can be determined accurately, the target object's activity can be presented quickly and/or the shared content can be shared with the target object, sharing efficiency and accuracy are improved, and fast, accurate sharing is achieved.
The method for acquiring the spatial distribution information comprises the following steps:
acquiring signal characteristics of at least one second device; and
based on the signal features, the spatial distribution information and the position of the at least one second device in the space are obtained. Therefore, the spatial distribution information is accurately acquired, and the equipment in the space is accurately positioned.
The method for acquiring the spatial distribution information further comprises the following steps:
acquiring an image acquired by at least one second device; and
based on the image, the spatial distribution information and the position of the at least one second device in the space are obtained. Therefore, the spatial distribution information is accurately acquired, and the equipment in the space is accurately positioned.
Wherein the position of the target object in the space is determined based on:
acquiring target object information via the at least one second device; and
determining the position of the target object based on the position, in the space, of the second device that acquired the target object information. The target object is thereby located accurately.
The method for acquiring the target object information comprises the following steps:
acquiring at least one piece of object information through infrared sensing and/or voiceprint recognition and/or camera-based photo analysis and/or a wearable device and/or a mobile device, wherein the object information comprises at least one of a physiological feature and an identification number; and
determining the target object information among the at least one object information. Therefore, the target object information is accurately collected.
Wherein the manner of presenting the activity condition of the target object comprises: text presentation and/or image presentation. Therefore, the activity condition of the target object is presented flexibly and diversely.
The first device comprises an audio device and an audio-video device, and determining the first device based on the position of the target object for presenting the shared content indicated by the sharing instruction comprises at least one of the following:
under the condition that the sharing instruction is used for sharing the sharing content comprising voice and/or characters to the target object, displaying the sharing content through the audio equipment matched with the position of the target object;
under the condition that the sharing instruction is to share the sharing content comprising videos and/or pictures to the target object, displaying the sharing content through the audio and video equipment matched with the position of the target object; and
and under the condition that the sharing instruction is to share the sharing content comprising videos and/or pictures to the target object, converting the sharing content into audio sharing content, and displaying the audio sharing content through the audio equipment matched with the position of the target object. Therefore, the sharing device is flexibly selected.
Wherein the determining of the first device based on the position of the target object for presenting the shared content indicated by the sharing instruction further comprises:
determining the first device matching the location of the target object based on the location of the target object; and
when the distance between the position of the target object and the position of the first device is larger than a first threshold value, displaying prompt information through the first device;
when the distance is less than or equal to the first threshold, presenting the shared content via the first device. Therefore, the shared content is flexibly displayed.
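The distance check described above can be sketched in a few lines. This is an illustrative assumption, not the patent's implementation: the threshold value, the Euclidean distance metric, and the function name are invented for the example.

```python
import math

def present(target_pos, device_pos, threshold_m=3.0):
    """Decide what the first device shows: a prompt when the target is
    farther away than the first threshold, otherwise the shared content.
    threshold_m (3 metres) is an assumed example value."""
    dist = math.dist(target_pos, device_pos)  # Euclidean distance, assumed metric
    return "prompt" if dist > threshold_m else "shared content"
```

A target five metres from the device would see a prompt; one within three metres would see the shared content directly.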
The invention also provides a control device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above sharing method when executing the computer program. Thus the target object and its position are determined accurately, the target object's activity is presented quickly and/or the shared content is shared with the target object, sharing efficiency and accuracy are improved, and fast, accurate sharing is achieved.
The invention further provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the above-described sharing method.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, so that it can be implemented according to this description, and to make the above and other objects, features, and advantages of the invention easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flowchart illustrating a sharing method according to an embodiment of the present invention;
fig. 2 is a first schematic view illustrating an effect of a sharing method according to a first embodiment of the present invention;
fig. 3 is a second schematic view illustrating an effect of the sharing method according to the first embodiment of the present invention;
fig. 4 is a schematic structural diagram of a control device according to a second embodiment of the present invention.
Detailed Description
In the following description, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
The technical solution of the present invention is further described in detail with reference to the drawings and specific embodiments.
Fig. 1 is a schematic flow diagram of a sharing method according to an embodiment of the present invention. The sharing method is applied to a control device, which may be a vehicle-mounted terminal, a mobile terminal, a server, or the like. As shown in fig. 1, the sharing method of the present invention may include the following steps:
step S101: acquiring an instruction so as to determine a target object according to the instruction;
specifically, the target object may be determined according to a role included in the instruction by performing a logical analysis on the acquired instruction.
For example, if the instruction "see what my son is doing" is obtained, the target object is determined to be the user's son; if the instruction "send the photo to the kid's mom" is obtained, the target object is determined to be the user's wife; if the instruction "send the video to Zhang San" is obtained, the target object is determined to be Zhang San; and if the instruction "remind the Little Demon King to watch less television" is obtained, where "Little Demon King" is the user's nickname for his son, the target object is determined to be the user's son.
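A minimal sketch of this role/nickname resolution in step S101 might look as follows. The contact table, names, and matching strategy are illustrative assumptions, not taken from the patent; a real system would perform fuller semantic analysis of the instruction.

```python
import re

# Assumed mapping from roles and registered nicknames to resolved targets.
CONTACTS = {
    "son": "user's son",
    "the kid's mom": "user's wife",
    "Zhang San": "Zhang San",
    "Little Demon King": "user's son",  # nickname the user registered for his son
}

def resolve_target(instruction):
    """Return the target object mentioned in the instruction, or None."""
    for name, target in CONTACTS.items():
        if re.search(re.escape(name), instruction, re.IGNORECASE):
            return target
    return None
```

For instance, "see what my son is doing" resolves to the user's son, while an instruction naming no known contact resolves to nothing.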
Step S102: acquiring spatial distribution information of a space where the target object is located so as to determine the position of the target object in the space;
in one embodiment, a method for obtaining spatially distributed information includes the steps of:
acquiring signal characteristics of at least one second device; and
based on the signal characteristics, spatial distribution information and a position of the at least one second device in space are obtained.
The signal characteristics comprise a signal type and signal parameters. The signal type comprises at least one of a Bluetooth signal, a wireless network signal, a communication signal, and an ultra-wideband signal; the signal parameters include at least one of signal strength, angle of arrival at the receiving point, time of arrival at the receiving point, and time difference of arrival at the receiving point. It is worth noting that the signal characteristics can determine not only the device position but also the device type: for Bluetooth, for example, each device carries a different Bluetooth signal identifier, and the device type can be determined from that identifier.
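The identifier-to-type idea can be sketched as a simple lookup. The prefix scheme below is an invented assumption for illustration; real Bluetooth identifiers and the mapping from them to device types would be vendor-specific.

```python
# Assumed identifier prefixes; real Bluetooth names are vendor-specific.
DEVICE_PREFIXES = {
    "TV-": "television",
    "SPK-": "smart speaker",
    "ROBOT-": "sweeping robot",
}

def device_type_from_identifier(bt_identifier):
    """Infer a device's type from its Bluetooth signal identifier."""
    for prefix, dev_type in DEVICE_PREFIXES.items():
        if bt_identifier.startswith(prefix):
            return dev_type
    return "unknown"
```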
In one embodiment, based on the signal characteristics, a plurality of position points and a movement track can be obtained from the movement of at least one second device, thereby sketching the spatial layout, i.e., obtaining the spatial distribution information; combining the spatial distribution information with the signal characteristics of the at least one second device then yields the position of the at least one second device in the space. The second device may be a mobile device such as a sweeping robot or a mobile air purifier.
For example, by continuously acquiring the signal characteristics of a sweeping robot as it moves, a series of its position points can be obtained and a movement track generated from them, sketching the spatial layout and yielding the spatial distribution information. In addition, the sweeping robot continuously receives Bluetooth signals sent by other second devices while moving, recognizes the identifiers of the received signals, and measures their strength; combined with the spatial distribution information, this yields the positions of the other second devices in the space.
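The robot-track idea above can be sketched as follows: along the robot's recorded track, the position fix where a peer device's Bluetooth signal was strongest approximates that device's location. The track data and the strongest-sample heuristic are illustrative assumptions; a real system would fuse many samples with a propagation model.

```python
def strongest_signal_position(track):
    """track: list of ((x, y), rssi_dbm) samples taken along the sweeping
    robot's path for one peer device's Bluetooth identifier.
    Return the robot position where that signal was strongest (least
    negative RSSI), taken as an estimate of the peer device's location."""
    pos, _ = max(track, key=lambda sample: sample[1])
    return pos
```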
In other embodiments, based on the signal characteristics, the spatial distribution information and the position of the at least one second device in the space can be obtained by matching each device to its usage area; here the second devices are appliances with a fixed usage area, such as a rice cooker, range hood, washing machine, or refrigerator.
For example, cooking appliances such as a rice cooker or range hood are used in the kitchen, so the position obtained from their signal characteristics can be labelled "kitchen"; laundry appliances such as an electric drying rack or washing machine are used on the balcony, so their position can be labelled "balcony". Note that the usage area of a second device may be set in advance, in which case the position obtained from the signal characteristics is associated directly with that preset usage area, so the spatial distribution information and device positions can be obtained quickly.
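The usage-area matching reduces to a lookup table. The entries below are a sketch based on the examples in the text; the table contents and function name are assumptions.

```python
# Preset usage areas for fixed-area appliances (example values from the text).
USAGE_AREAS = {
    "rice cooker": "kitchen",
    "range hood": "kitchen",
    "refrigerator": "kitchen",
    "washing machine": "balcony",
    "drying rack": "balcony",
}

def label_device_area(device_name):
    """Label a detected device with its preset usage area, if known."""
    return USAGE_AREAS.get(device_name, "unlabelled")
```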
In other embodiments, the method for obtaining spatial distribution information further comprises the following steps:
acquiring an image acquired by at least one second device; and
based on the acquired images, spatial distribution information and the position of the at least one second device in space are obtained.
For example, several cameras may be used to acquire images of the space; the acquired images are compared and analysed, and characteristic items in them are recognised to obtain the spatial distribution information and the position of at least one second device in the space. If an image contains characteristic furniture such as a sofa and a tea table, the space can be identified as a living room; if it contains a bed and a wardrobe, the space can be identified as a bedroom. Further, if devices such as a television and a smart speaker appear in the living-room image, it can be determined that those devices, and the camera that captured the image, are located in the living room; likewise, if a projector and an intercom appear in the bedroom image, those devices and the corresponding camera are located in the bedroom.
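The room-classification step can be sketched with a signature table once an object detector has produced a set of recognised items. The detector output is mocked here, and the signature table is an invented example following the furniture mentioned above.

```python
# Characteristic items per room type (example signatures from the text).
ROOM_SIGNATURES = {
    "living room": {"sofa", "tea table"},
    "bedroom": {"bed", "wardrobe"},
}

def classify_room(detected_items):
    """detected_items: set of item names an object detector found in the
    camera's image. Return the room whose signature items appear."""
    for room, signature in ROOM_SIGNATURES.items():
        if signature & detected_items:  # any characteristic item suffices
            return room
    return "unknown"
```

Devices found in the same image (a television, a smart speaker) can then be assigned the classified room as their position, along with the camera itself.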
As another example, simultaneous localization and mapping (SLAM) may be adopted: a vision-equipped robot (e.g., a sweeping robot with image capture and positioning functions) starts from an unknown place in an unknown environment, continuously acquires environment images as it moves, locates itself by means of map features (e.g., corners, pillars) extracted from those images, and incrementally builds a map from its own positions, thereby obtaining the spatial distribution information and its position in the space. In addition, the robot continuously receives Bluetooth signals sent by other second devices while moving, recognizes their identifiers, and measures their strength; combined with the spatial distribution information, this yields the positions of the other second devices in the space.
Further, the position of the target object in space is determined based on the following steps:
acquiring target object information via at least one second device; and
determining the position of the target object based on the position, in the space, of the second device that acquired the target object information.
The second devices include information collection devices such as infrared sensors, cameras, smart speakers, wearable devices, and mobile devices.
Optionally, the method of collecting target object information includes the following steps:
acquiring at least one piece of object information through infrared sensing and/or voiceprint recognition and/or camera-based photo analysis and/or a wearable device and/or a mobile device, wherein the object information comprises at least one of a physiological feature and an identification number; and determining the target object information among the at least one piece of object information.
Here, a physiological feature is a unique, inherent biological characteristic of a person, including but not limited to voiceprint, facial, iris, and fingerprint features. An identification number identifies the user's identity and includes, but is not limited to, an identity number (e.g., an ID-card or passport number), an account (e.g., a social account), a unique code (e.g., the identifier of a bound device), and a private number (e.g., a mobile phone number).
For example, one or more spaces where an object exists may be determined by an infrared sensor, then a camera in the corresponding space is controlled to take a picture of the object and perform facial recognition, and whether facial features matching with the facial features of the target object are included in the recognized facial features may be determined; and/or controlling the intelligent sound box in the corresponding space to collect object sound and perform voiceprint recognition, and judging whether the recognized voiceprint features contain voiceprint features matched with the voiceprint features of the target object. If the recognized facial features have facial features matched with the facial features of the target object and/or the recognized voiceprint features have voiceprint features matched with the voiceprint features of the target object, the target object can be determined to be located in the corresponding space where the camera and/or the smart speaker are located.
For example, the position of the wearable device and/or the mobile device in the space may be determined according to a bluetooth signal of the wearable device (e.g., a smart watch, a sports bracelet, etc.) and/or the mobile device (e.g., a mobile phone, a tablet computer, etc.), and then user information bound to the wearable device and/or the mobile device may be acquired, where the user information includes at least one of a fingerprint feature, an identity card number, and a login account number, and it is determined whether the bound user information matches a target object, and if the bound user information matches the target object, it may be determined that the target object is located in the space where the corresponding wearable device and/or the mobile terminal is located.
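The bound-device path above amounts to matching bound user information against the target and returning the bound device's room. The record format and sample data are assumptions for illustration.

```python
def locate_target(target_id, devices):
    """devices: list of dicts, each with 'bound_user' (the user information
    bound to a wearable/mobile device) and 'room' (where the device was
    located from its Bluetooth signal). Return the room where a device
    bound to the target was found, or None."""
    for dev in devices:
        if dev["bound_user"] == target_id:
            return dev["room"]
    return None
```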
Step S103: if the instruction is a display instruction, presenting activity of the target object based on the position of the target object; and under the condition that the instruction is a sharing instruction, determining first equipment based on the position of the target object for presenting the shared content indicated by the sharing instruction.
Optionally, the activity of the target object is obtained, including but not limited to the following ways:
determining the activity condition of the target object based on the physiological data acquired by the second device;
determining the activity condition of the target object based on the use condition of the second device and/or the current time period;
determining the activity of the target object based on the image and/or video captured by the second device.
For example, if a wearable device detects that the target object's heart rate has stayed below a preset value for a preset time, it may be determined that the target object is sleeping. If the target object is located in the kitchen and the current time falls in a meal-preparation period and/or one or more kitchen appliances (such as the range hood or rice cooker) are working, it may be determined that the target object is cooking. If a camera captures a current image and/or video of the target object, an image-captioning technique may be used to analyse it and determine the target object's activity, for example concluding from the analysis that the target object is playing a game.
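The three activity cues can be combined into a simple rule cascade. Everything concrete here is an assumption: the heart-rate threshold, the meal-preparation hours, and the idea of passing a caption string in place of a real image-captioning model.

```python
def infer_activity(location, heart_rate=None, hour=None,
                   kitchen_appliance_on=False, caption=None):
    """Infer the target object's activity from available cues.
    heart_rate: latest reading from a wearable device, if any.
    hour: current hour of day. caption: output of an (assumed)
    image-captioning model run on a camera frame, if available."""
    if heart_rate is not None and heart_rate < 55:  # assumed resting threshold
        return "sleeping"
    if location == "kitchen" and (kitchen_appliance_on or hour in (11, 12, 17, 18)):
        return "cooking"
    if caption is not None:  # fall back to the captioning result directly
        return caption
    return "unknown"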
Optionally, the manner of presenting the activity condition of the target object includes: text presentation and/or image presentation;
exemplarily, if it is only obtained that the current position of the target object a is a kitchen, and the current time is a cooking time period and/or one or more kitchen appliances are in a working state (such as a range hood, an electric cooker, etc.), the activity condition of the target object may be presented in a text form as "the target object a cooks in the kitchen", as shown in fig. 2; in other embodiments, if it is obtained that the current position of the target object B is a bedroom and it is obtained that the target object B sleeps in the bedroom, a sleeping screen of the target object B may be presented in a picture or video form, as shown in fig. 2, and/or an activity of presenting the target object in a text form is "the target object B sleeps in the bedroom".
In one embodiment, the first device comprises an audio device and an audiovisual device; determining a first device based on the position of the target object for presenting the shared content indicated by the sharing instruction, wherein the method comprises at least one of the following steps:
under the condition that the sharing instruction is to share the sharing content comprising voice and/or characters to the target object, displaying the sharing content through the audio equipment matched with the position of the target object;
under the condition that the sharing instruction is to share the sharing content comprising videos and/or pictures to the target object, displaying the sharing content through the audio and video equipment matched with the position of the target object; and
and under the condition that the sharing instruction is to share the sharing content comprising the video and/or the picture to the target object, converting the sharing content into audio sharing content, and displaying the audio sharing content through the audio equipment matched with the position of the target object.
Specifically, if the sharing instruction is to share content comprising video and/or pictures with the target object, but no audio-video device matching the target object's position is found and only an audio device is, the video and/or picture content is converted into corresponding audio information, which is then played through the audio device.
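The device-selection rules of step S103, including this fallback, can be sketched as follows. The representation of device capabilities as a set of strings and the (device, needs_conversion) return pair are assumptions made for the example.

```python
def select_first_device(content_type, devices_at_location):
    """Pick the first device for the target object's location.
    devices_at_location: set drawn from {"audio", "audio_video"}.
    Returns (device, needs_conversion): voice/text goes to an audio
    device; video/pictures prefer an audio-video device, falling back
    to an audio device after converting the content to audio."""
    if content_type in ("voice", "text"):
        if "audio" in devices_at_location:
            return "audio", False
    elif content_type in ("video", "picture"):
        if "audio_video" in devices_at_location:
            return "audio_video", False
        if "audio" in devices_at_location:
            return "audio", True  # convert shared content to audio first
    return None, False
```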
The method for converting the video sharing content into the corresponding audio information comprises the following steps:
extracting audio sharing content included in the video sharing content;
and voice broadcasting the audio sharing content.
In addition, converting picture content into corresponding audio information comprises either of the following steps:
converting the picture content into corresponding text content and broadcasting the text content by voice; or
for picture content that cannot be converted into text, broadcasting a voice prompt informing the user that the shared content is a picture.
Further, converting the picture content into corresponding text content comprises recognizing the text embedded in the picture based on optical character recognition (OCR), with the following steps:
1) acquiring the picture content;
2) preprocessing the picture content;
optionally, preprocessing the picture with the open-source library OpenCV, including operations such as cropping, flipping, color conversion, and image enhancement; the processed picture is then geometrically normalized, being uniformly resized to 48 × 48 pixels by bilinear interpolation;
3) detecting text regions in the picture using, for example, a convolutional neural network (CNN) model and a recurrent neural network (RNN) model; and
4) recognizing the text within the detected regions using, for example, a CNN model, an RNN model, and the connectionist temporal classification (CTC) algorithm, thereby generating the text content.
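The geometric-normalization step above can be illustrated with a minimal sketch. This assumes a grayscale image held as a NumPy array; in practice `cv2.resize` with `INTER_LINEAR` would perform the same bilinear resize to the 48 × 48 target mentioned in the text.

```python
import numpy as np

def bilinear_resize(img, out_h=48, out_w=48):
    """Geometrically normalize a grayscale image to a fixed size using
    bilinear interpolation, as in the preprocessing step described above.
    The 48x48 default comes from the text."""
    in_h, in_w = img.shape
    # Map each output pixel back to fractional source coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]  # vertical interpolation weights
    wx = (xs - x0)[None, :]  # horizontal interpolation weights
    # Blend the four neighbouring source pixels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

The corner pixels of the output coincide exactly with the corner pixels of the input, a standard property of this coordinate mapping.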
Further, converting the picture content into corresponding text content may also comprise the following step:
identifying, based on image-captioning techniques, the semantic text contained in the picture content and using the semantic text as the text content.
The semantic text is a descriptive sentence generated from the objects in the picture and their expressions, motions, surroundings, and so on. For example, the semantic text for the picture shown in fig. 3 is: a road with mountains in the background and a stop sign beside it.
In this way, video and/or picture content can be accurately converted into corresponding audio information, so that when no audio-video device matched to the target object's position is available, the content can still be conveyed promptly and accurately through an audio device, improving the sharing experience.
In other embodiments, determining, based on the position of the target object, the first device for presenting the shared content indicated by the sharing instruction further comprises:
determining the first device matched to the position of the target object based on that position;
when the distance between the position of the target object and the position of the matched first device is greater than a first threshold, presenting prompt information through the first device; and
when the distance is less than or equal to the first threshold, presenting the shared content through the first device.
The first device matched to the position of the target object comprises at least one of the first device closest to that position and a first device on the target object's critical path. The critical path can be determined by combining the spatial distribution information with the target object's motion trajectory; the trajectory can be obtained from historical monitoring data, which records the target object's most frequent motion trajectories in different time periods.
For example, as shown in fig. 2, if the sharing instruction is to share content comprising video and/or pictures with target object A, the audio-video devices are distributed in the living room and the bedroom, and target object A is currently in the kitchen, then the audio-video device in the living room may be determined to be the one closest to the target object's position.
As another example, if the sharing instruction is to share content comprising voice and/or text with the target object, the audio devices are distributed in the dining room and the washroom, the target object is currently in the bedroom, and its motion trajectory in this time period runs from the bedroom through the dining room to the kitchen, then the audio device in the dining room may be determined to be the one on the target object's critical path.
Further, if the sharing instruction is to share content comprising video and/or pictures with the target object: when the distance between the target object's position and the matched audio-video device exceeds the first threshold, that device is controlled to broadcast prompt information by voice; and when the distance is less than or equal to the first threshold, the device is controlled to display the shared content.
For example, if the sharing instruction is to share content comprising video and/or pictures with the target object, the target object is currently in the bedroom, the closest audio-video device is in the living room, and the distance between them exceeds the first threshold, the living-room device is controlled to broadcast a voice prompt such as "please view a picture sent by your son". If, upon hearing the prompt, the target object moves from the bedroom to the living room so that the distance falls to or below the first threshold, the living-room device is controlled to display the shared video and/or picture content.
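The matching and first-threshold logic in the examples above can be sketched as follows. The planar-coordinate representation of room positions, the room names, and the function name are illustrative assumptions for the example.

```python
from math import hypot

# Hypothetical sketch of matching a first device to the target's position
# and applying the first-threshold rule from the text.
def choose_action(target_pos, devices, threshold):
    """devices: {name: (x, y)} positions of candidate first devices.
    Returns (device_name, action), where action is "prompt" when the
    target is farther than `threshold` from the matched device (so the
    device broadcasts prompt information), else "present"."""
    # Match the device closest to the target's position.
    name, pos = min(devices.items(),
                    key=lambda kv: hypot(kv[1][0] - target_pos[0],
                                         kv[1][1] - target_pos[1]))
    dist = hypot(pos[0] - target_pos[0], pos[1] - target_pos[1])
    return (name, "prompt" if dist > threshold else "present")
```

With a living-room device at the origin and a bedroom device ten units away, a target standing near the bedroom but beyond the threshold would be prompted there first, mirroring the bedroom-to-living-room example above.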
According to the sharing method provided by this embodiment of the invention, by accurately determining the target object and its position, the target object's activity and/or the shared content can be presented to the target object quickly, effectively improving sharing efficiency and accuracy and realizing fast, accurate sharing.
Fig. 4 is a schematic structural diagram of a control device according to a second embodiment of the present invention. As shown in fig. 4, the control device of this embodiment comprises a processor 110, a memory 111, and a computer program 112 stored in the memory 111 and executable on the processor 110. When executing the computer program 112, the processor 110 implements the steps of the sharing method embodiments described above, such as steps S101 to S103 shown in fig. 1.
The control device may include, but is not limited to, the processor 110 and the memory 111. Those skilled in the art will appreciate that fig. 4 is merely an example of a control device and does not limit it; the control device may include more or fewer components than shown, combine certain components, or use different components. For example, it may also include input/output devices, network access devices, buses, and so on.
The processor 110 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 111 may be an internal storage unit of the control device, such as its hard disk or main memory. The memory 111 may also be an external storage device attached to the control device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card. Further, the memory 111 may include both an internal storage unit of the control device and an external storage device. The memory 111 stores the computer program as well as other programs and data required by the control device, and may also temporarily store data that has been or is to be output.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the sharing method described above.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative methods and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some of their technical features equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention and are intended to fall within the scope of the present invention.

Claims (10)

1. A sharing method, comprising the steps of:
acquiring an instruction and determining a target object according to the instruction;
acquiring spatial distribution information of the space in which the target object is located to determine the position of the target object in the space;
in a case where the instruction is a display instruction, presenting the activity of the target object based on the position of the target object; and
in a case where the instruction is a sharing instruction, determining, based on the position of the target object, a first device for presenting the shared content indicated by the sharing instruction.
2. The sharing method according to claim 1, wherein acquiring the spatial distribution information comprises the following steps:
acquiring signal characteristics of at least one second device; and
obtaining, based on the signal characteristics, the spatial distribution information and the position of the at least one second device in the space.
3. The sharing method according to claim 1, wherein acquiring the spatial distribution information further comprises the following steps:
acquiring an image captured by at least one second device; and
obtaining, based on the image, the spatial distribution information and the position of the at least one second device in the space.
4. The sharing method according to claim 2 or 3, wherein the position of the target object in the space is determined by:
collecting target object information via the at least one second device; and
determining the position of the target object based on the position in the space of the second device that collected the target object information.
5. The sharing method according to claim 4, wherein collecting the target object information comprises the following steps:
collecting at least one piece of object information by means of infrared sensing, voiceprint recognition, camera image analysis, a wearable device, and/or a mobile device, the object information comprising at least one of a physiological characteristic and an identification number; and
determining the target object information from the at least one piece of object information.
6. The sharing method of claim 1, wherein the manner of presenting the activity of the target object comprises: text presentation and/or image presentation.
7. The sharing method according to claim 1, wherein the first device comprises an audio device and an audio-video device, and wherein determining, based on the position of the target object, the first device for presenting the shared content indicated by the sharing instruction comprises at least one of:
in a case where the sharing instruction is to share content comprising voice and/or text with the target object, presenting the shared content through the audio device matched to the position of the target object;
in a case where the sharing instruction is to share content comprising video and/or pictures with the target object, presenting the shared content through the audio-video device matched to the position of the target object; and
in a case where the sharing instruction is to share content comprising video and/or pictures with the target object, converting the shared content into audio content and presenting the audio content through the audio device matched to the position of the target object.
8. The sharing method according to claim 1, wherein determining, based on the position of the target object, the first device for presenting the shared content indicated by the sharing instruction further comprises:
determining the first device matched to the position of the target object based on that position;
when the distance between the position of the target object and the position of the first device is greater than a first threshold, presenting prompt information through the first device; and
when the distance is less than or equal to the first threshold, presenting the shared content through the first device.
9. A control device comprising a memory, at least one processor and a computer program stored in the memory and executable on the at least one processor, characterized in that the at least one processor implements the steps of the sharing method according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the sharing method according to any one of claims 1 to 8.
CN202110290205.8A 2021-03-18 2021-03-18 Sharing method, control device and computer-readable storage medium Pending CN115118536A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110290205.8A CN115118536A (en) 2021-03-18 2021-03-18 Sharing method, control device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115118536A true CN115118536A (en) 2022-09-27

Family

ID=83323534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110290205.8A Pending CN115118536A (en) 2021-03-18 2021-03-18 Sharing method, control device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115118536A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006236195A (en) * 2005-02-28 2006-09-07 Casio Comput Co Ltd Presentation control device
US7484008B1 (en) * 1999-10-06 2009-01-27 Borgia/Cummins, Llc Apparatus for vehicle internetworks
WO2017128675A1 (en) * 2016-01-29 2017-08-03 宇龙计算机通信科技(深圳)有限公司 Information sharing method and information sharing apparatus for wearable device
CN108055558A (en) * 2017-12-27 2018-05-18 浙江大华技术股份有限公司 A kind of on-screen display system and method
EP3445056A2 (en) * 2017-05-16 2019-02-20 Apple Inc. Methods and interfaces for home media control
CN112383500A (en) * 2020-06-15 2021-02-19 岭博科技(北京)有限公司 Method and system for controlling access request related to screen projection equipment
CN112437190A (en) * 2019-08-08 2021-03-02 华为技术有限公司 Data sharing method, graphical user interface, related device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination