CN113268014A - Carrier, facility control method, equipment, system and storage medium - Google Patents

Carrier, facility control method, equipment, system and storage medium

Info

Publication number
CN113268014A
CN113268014A
Authority
CN
China
Prior art keywords
users
behavior
facility
user
adjustment parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010093466.6A
Other languages
Chinese (zh)
Inventor
戴继松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010093466.6A priority Critical patent/CN113268014A/en
Publication of CN113268014A publication Critical patent/CN113268014A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00: Programme-control systems
    • G05B 19/02: Programme-control systems electric
    • G05B 19/04: Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B 19/042: Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B 19/0423: Input/output
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00: Program-control systems
    • G05B 2219/20: Pc systems
    • G05B 2219/25: Pc structure of the system
    • G05B 2219/25257: Microcontroller

Abstract

The embodiment of the application provides a carrier, a facility control method, equipment, a system and a storage medium. The carrier control method comprises: determining a sign difference between at least two users in the same behavior scene; respectively determining, according to the sign difference, adjustment parameter values corresponding to the behavior carriers used by the at least two users; and adjusting the working states of the behavior carriers according to the adjustment parameter values so that the behavior states of the at least two users adapt to each other. The working state of a behavior carrier influences the behavior state of the user it carries. In the embodiment of the application, the behavior states of users in the same behavior scene can be made to adapt to each other by adjusting the working states of the behavior carriers, so that the users no longer feel unequal or disrespected in the behavior scene.

Description

Carrier, facility control method, equipment, system and storage medium
Technical Field
The present application relates to the field of internet of things technology, and in particular, to a carrier, a facility control method, a device, a system, and a storage medium.
Background
Independent small spaces are finding more and more application scenarios, such as street-side karaoke booths and leisure booths. These independent small spaces can provide a private space for around two users, allowing them to entertain, relax, hold meetings, or converse in a relatively closed and quiet environment.
Currently, limited by the facilities in these independent small spaces, in many cases a user may not feel comfortable, equal, or respected in them, resulting in a poor user experience.
Disclosure of Invention
Aspects of the present disclosure provide a carrier, a facility control method, an apparatus, a system, and a storage medium to improve flexibility of facilities in a space and improve user experience.
The embodiment of the application provides a carrier control method, which comprises the following steps:
determining a sign difference between at least two users in the same behavior scene;
respectively determining adjustment parameter values corresponding to behavior carriers used by the at least two users according to the sign difference;
adjusting the working states of the behavior carriers according to the adjustment parameter values so as to enable the behavior states of the at least two users to be mutually adaptive;
wherein the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
An embodiment of the present application further provides a facility control method, including:
determining a sign difference between at least two users using the same target facility;
determining an adjustment parameter value corresponding to the target facility according to the sign difference between the at least two users;
and adjusting the working state of the target facility according to the adjustment parameter value so as to adapt to the physical signs of the at least two users.
An embodiment of the present application further provides a facility control method, including:
acquiring sign information of a user entering a target space;
determining an adjustment parameter value of at least one facility in the target space according to the physical sign information of the user;
and respectively adjusting the working state of the at least one facility according to the adjustment parameter value of the at least one facility so as to adapt to the physical sign information of the user.
An embodiment of the present application further provides a control system, including: a controller and at least two behavior carriers;
the at least two behavior carriers are used for bearing users;
the controller is used for determining sign differences between at least two users in the same behavior scene; respectively determining adjustment parameter values corresponding to behavior carriers used by the at least two users according to the sign difference; adjusting the working states of the behavior carriers according to the adjustment parameter values so as to enable the behavior states of the at least two users to be mutually adaptive;
wherein the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
The embodiment of the application also provides an intelligent facility, which comprises a facility body, a processor and a driving assembly;
the processor is used for determining sign differences among users in a behavior scene containing the users carried by the intelligent facility; determining an adjusting parameter value according to the sign difference;
and adjusting the working state of the intelligent facility by using the driving component according to the adjustment parameter value so as to enable the behavior state of the user borne by the intelligent facility to be matched with the behavior states of other users in the behavior scene.
An embodiment of the present application further provides a control system, including: a controller and a target facility;
the target facility is used for providing facility service for the user;
the controller is configured to determine a sign difference between at least two users using the same target facility; determining an adjustment parameter value corresponding to the target facility according to the sign difference between the at least two users; and adjusting the working state of the target facility according to the adjustment parameter value so as to adapt to the physical signs of the at least two users.
The embodiment of the application also provides an intelligent facility, which comprises a facility body, a processor and a driving assembly;
the processor is configured to determine a sign difference between at least two users using the smart appliance; determining an adjusting parameter value according to the sign difference;
and adjusting the working state of the intelligent facility by utilizing the driving component according to the adjustment parameter value so as to adapt to the physical signs of the at least two users.
An embodiment of the present application further provides a control system, including: a controller and at least one facility located in a target space;
the at least one facility is used for providing facility service for the user;
the controller is used for acquiring sign information of a user entering a target space; determining an adjustment parameter value of the at least one facility according to the physical sign information of the user; and respectively adjusting the working state of the at least one facility according to the adjustment parameter value of the at least one facility so as to adapt to the physical sign information of the user.
The embodiment of the application also provides a computing device, which comprises a memory and a processor;
the memory is to store one or more computer instructions;
the processor is coupled with the memory for executing the one or more computer instructions for:
determining a sign difference between at least two users in the same behavior scene;
respectively determining adjustment parameter values corresponding to behavior carriers used by the at least two users according to the sign difference;
adjusting the working states of the behavior carriers according to the adjustment parameter values so as to enable the behavior states of the at least two users to be mutually adaptive;
wherein the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
The embodiment of the application also provides a computing device, which comprises a memory and a processor;
the memory is to store one or more computer instructions;
the processor is coupled with the memory for executing the one or more computer instructions for:
determining a sign difference between at least two users using the same target facility;
determining an adjustment parameter value corresponding to the target facility according to the sign difference between the at least two users;
and adjusting the working state of the target facility according to the adjustment parameter value so as to adapt to the physical signs of the at least two users.
The embodiment of the application also provides a computing device, which comprises a memory and a processor;
the memory is to store one or more computer instructions;
the processor is coupled with the memory for executing the one or more computer instructions for:
acquiring sign information of a user entering a target space;
determining an adjustment parameter value of at least one facility in the target space according to the physical sign information of the user;
and respectively adjusting the working state of the at least one facility according to the adjustment parameter value of the at least one facility so as to adapt to the physical sign information of the user.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the aforementioned control method.
In the embodiment of the application, the sign difference between different users in the same behavior scene can be determined, and the working states of behavior carriers used by different users can be adjusted according to the sign difference between different users, so that the behavior states of the users in the behavior scene are mutually adaptive. Accordingly, in the embodiment of the application, the behavior states of the users in the same behavior scene are mutually adaptive, and the users do not have the feeling of inequality or not being respected in the behavior scene any more.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1a is a schematic diagram of a control system according to an exemplary embodiment of the present disclosure;
FIG. 1b is a schematic diagram of an intelligent facility according to an exemplary embodiment of the present application;
FIGS. 2a-2c are schematic diagrams of an application scenario provided in an exemplary embodiment of the present application;
FIG. 3a is a schematic block diagram of another control system provided in accordance with another exemplary embodiment of the present application;
FIG. 3b is a schematic diagram of a smart facility according to another exemplary embodiment of the present application;
FIGS. 4a-4c are schematic diagrams of another application scenario provided in another exemplary embodiment of the present application;
FIG. 5 is a schematic block diagram of yet another control system provided in accordance with yet another exemplary embodiment of the present application;
FIG. 6 is a schematic flowchart of a carrier control method according to another exemplary embodiment of the present application;
FIG. 7 is a schematic block diagram of a computing device according to yet another exemplary embodiment of the present application;
FIG. 8 is a schematic flow chart diagram of another facility control method provided in accordance with yet another exemplary embodiment of the present application;
FIG. 9 is a schematic block diagram of another computing device provided in accordance with yet another exemplary embodiment of the present application;
FIG. 10 is a schematic flow chart diagram of yet another facility control method provided in accordance with yet another exemplary embodiment of the present application;
FIG. 11 is a schematic structural diagram of another computing device according to another exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, limited by the facilities in independent small spaces, in many cases a user may feel uncomfortable, unequal, or disrespected in such a space, resulting in a poor user experience. In view of these technical problems, the embodiments of the present application provide a solution whose basic idea is as follows: the sign difference between different users in the same behavior scene can be determined, and the working states of the behavior carriers used by the different users can be adjusted according to that sign difference, so that the behavior states of the users in the behavior scene adapt to each other. Accordingly, in the embodiment of the application, the behavior states of users in the same behavior scene adapt to each other, and the users no longer feel unequal or disrespected in the behavior scene.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
FIG. 1a is a schematic structural diagram of a control system according to an exemplary embodiment of the present application. As shown in FIG. 1a, the control system comprises a controller 10 and at least two behavior carriers 20.
The control system provided by this embodiment can be applied to independent small-space scenarios, such as karaoke booths, leisure booths, meeting booths, dining booths, and the like. Of course, it can also be applied to spaces of other sizes, such as meeting rooms, KTV rooms, restaurants, and the like. This embodiment does not limit the application scenario.
The control system provided by the embodiment mainly provides a solution for adapting the behavior states of at least two users to each other aiming at the situation that at least two users participate in the same behavior scene.
In this embodiment, a behavior scene may be understood as an event process in which users participate. A user may exhibit active or passive behavior in the behavior scene. The type of the behavior scene is not limited; it may be a chat scene, a dining scene, a conference scene, an entertainment scene, and the like. At least two users participating in the same behavior scene means that the at least two users participate in the same event together. For example, at least two users chat together, eat together, join a video conference together, or sing karaoke together. The present embodiment is not limited thereto.
For the controller 10, a sign difference between at least two users in the same behavioral scenario may be determined. Wherein, the physical signs of the user refer to physical features of the user, including but not limited to height, weight, sex, and the like. The sign difference can be understood as the difference of different users under the same sign, such as height difference, weight difference, and the like.
In this embodiment, the controller 10 may respectively determine, according to the sign difference between at least two users, an adjustment parameter value corresponding to the behavior carrier 20 used by each of the at least two users, where the adjustment parameter value is used to define an adjustment degree of the behavior carrier; and adjusting the working state of each behavior carrier 20 according to the adjustment parameter value so as to adapt the behavior states of the at least two users to each other.
In practical applications, the operating state of only some of the behavior carriers 20 may need adjustment. Accordingly, in this embodiment, the adjustment parameter value may be 0: for a behavior carrier 20 whose operating state does not need to be adjusted, its adjustment parameter value may be configured as 0, and when the adjustment parameter value corresponding to a behavior carrier is 0, the working state of that behavior carrier is not adjusted.
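The control flow above can be sketched in code. This is a minimal illustrative sketch, not the patent's implementation: the function name `compute_adjustments`, the use of height as the sign, and the target of equalizing toward the tallest user are all assumptions for illustration. Note that the tallest user's carrier naturally receives an adjustment value of 0, matching the "no adjustment needed" case described above.

```python
def compute_adjustments(heights: dict[str, float]) -> dict[str, float]:
    """Return a per-user carrier-height offset (in cm) that brings each
    user's level toward that of the tallest user. An offset of 0 means
    the corresponding behavior carrier is left in its initial state."""
    tallest = max(heights.values())
    return {user: tallest - h for user, h in heights.items()}


# Two users with a 20 cm height difference: the taller user's carrier
# stays put (0), the shorter user's carrier is raised by 20 cm.
adjustments = compute_adjustments({"user_a": 185.0, "user_b": 165.0})
# → {"user_a": 0.0, "user_b": 20.0}
```

In practice the offsets would also be clamped to the carrier's mechanical adjustment limits, as the ergonomic constraints discussed later require.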
The behavior carrier 20 is understood to be a facility for carrying users and providing support for user behaviors. The operation state of the behavior carrier 20 includes, but is not limited to, a height state, a hardness state, an inclination state, and the like. In the case that the working state of the behavior carrier 20 changes, the behavior state of the user carried by the behavior carrier in the behavior scene may also change accordingly.
For example, in the case that at least two users chat together, a chair used by each user can be used as the behavior carrier 20, the height state of the chair can be adjusted, and when the height of the chair changes, the chat view angle (as the behavior state) of the user changes accordingly.
For another example, when at least two users have a meal together, a chair used by each user may be used as the behavior carrier 20, a table used may also be used as the behavior carrier 20, the height states of the chair and the table may be adjusted, and when the height of the chair or the table is changed, the dining posture (as the behavior state) of the user is changed accordingly.
In this embodiment, the behavior carriers 20 in different behavior scenes may not be identical. A behavior carrier 20 may be a chair, a table, a sofa, or a lifting table, among others. Of course, this is merely exemplary, and the behavior carrier 20 in the present embodiment is not limited thereto.
In practical applications, differences in physical signs between users may cause the users to feel unequal or disrespected while participating in a behavior scene; especially when the environmental space corresponding to the behavior scene is relatively small and the behavior distance between the users is relatively close, this feeling is more intense.
In this embodiment, the controller 10 can use the sign differences between different users as the adjustment basis and adjust the working states of the behavior carriers 20 used by the different users, thereby adjusting their behavior states, so as to alleviate the feelings of inequality or disrespect caused by sign differences in the behavior scene.
For example, when two users chat at close range, a difference in height leads to a pitched communication angle, which may cause the shorter user to feel oppressed or unequal. The controller 10 can adjust the heights of the chairs used by the two parties so that the chair used by the shorter party is raised higher than the chair used by the taller party, making the communication viewing angles of the two parties tend to be consistent and thereby relieving the oppression and sense of inequality.
For another example, a weight difference between two users chatting at close range may result in different degrees of seat sinking, which may cause discomfort or embarrassment to the heavier party. The controller 10 can adjust the firmness of the seats used by the two parties so that the seat used by the heavier party is firmer than the seat used by the lighter party, reducing the sinking of the heavier party's seat so that the seating postures of the two parties tend to be consistent, thereby relieving the aforementioned embarrassment.
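The firmness adjustment just described can be sketched the same way. This is a hedged illustration with assumed names and an assumed linear model: `firmness_offsets` and the firmness-per-kilogram coefficient `k` are not from the patent text, which does not specify how firmness is computed from weight.

```python
def firmness_offsets(weights: dict[str, float], k: float = 0.5) -> dict[str, float]:
    """Map each user's weight (kg) to a seat-firmness increase relative
    to the lightest user, so that heavier users' seats sink less.
    `k` is an assumed firmness-per-kg coefficient."""
    lightest = min(weights.values())
    return {user: k * (w - lightest) for user, w in weights.items()}


# A 30 kg weight difference: the heavier user's seat firmness is
# raised by 15 units, the lighter user's seat is unchanged.
offsets = firmness_offsets({"user_a": 60.0, "user_b": 90.0})
# → {"user_a": 0.0, "user_b": 15.0}
```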
For another example, when two users have a meal together, a difference in height causes differences in dining posture: the shorter party may feel oppressed and unequal, while the taller party may find the table too low and need to bend over to eat, resulting in poor dining comfort. The controller 10 can adjust the heights of the chairs used by the two parties, and can also adjust the height of the table they share, so that the chair used by the shorter party is raised higher than the chair used by the taller party and the table height suits the adjusted dining heights of both parties. In this way, the dining postures of the two parties tend to be consistent, relieving the oppression and sense of inequality.
The behavior state of a user may be understood as the physical posture the user adopts when participating in a behavior scene, such as a standing posture, a sitting posture, a line-of-sight angle, a head rotation angle, an arm posture, and the like. That the behavior states of two users adapt to each other may be understood as improving the degree of matching between behavior states that is affected by their physical signs. In practical applications, the behavior states may not match completely due to limitations such as the adjustment limits of the behavior carriers 20 and ergonomic requirements; this embodiment does not intend to force complete matching between behavior states, but rather to improve the degree of matching as much as possible within those limitations.
For example, the behavior state adaptation may be adaptation of communication perspectives between different users, or may be that heads of different users all enter a shooting range of the same video conference capturing device.
In this embodiment, the sign difference between different users in the same behavior scene can be determined, and the working states of the behavior carriers 20 used by different users can be adjusted according to the sign difference between different users, so that the behavior states of the users in the behavior scene are adapted to each other. Accordingly, in the embodiment of the application, the behavior states of the users in the same behavior scene are mutually adaptive, and the users do not have the feeling of inequality or not being respected in the behavior scene any more.
In the above or following embodiments, the controller 10 can detect the respective sign information of at least two users by using the detection component, and determine the sign difference between at least two users according to the respective sign information of at least two users.
Wherein, the detecting component may be an image collecting component, and the controller 10 may collect images of at least two users by using an image collecting device; extracting respective sign information of at least two users from the images of the at least two users; and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
In this embodiment, the image capturing device captures images of at least two users as needed.
In one implementation, an image capture device may be deployed at an entrance to an environmental space corresponding to an action scene. The entrance is a necessary place for at least two users participating in the behavior scene, so that the image acquisition device deployed at the entrance can successfully acquire the images of the at least two users.
In this implementation, a single capture by the image acquisition device may contain a single user; of course, if multiple users pass through the entrance simultaneously, a single capture may contain multiple users. This is not limited herein.
In practical application, a reference object may be set in the acquisition range of the image acquisition device, the images of at least two users acquired by the image acquisition device include the reference object, and the controller 10 may perform image analysis on the images acquired by the image acquisition device, and determine the sign information of each user according to the reference object.
For example, a height scale may be provided at the entrance. When a user passes through the entrance, the image acquisition device captures an image containing both the user and the scale, and the controller 10 can analyze which graduation mark of the scale the crown of the user's head is level with, thereby determining the user's height information.
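The scale-based height reading can be sketched as a simple pixel-to-centimeter conversion. This is an assumed illustration: the function `estimate_height`, the pixel-coordinate convention, and the calibration value `px_per_cm` are not specified in the patent text; a real system would first detect the crown and the scale marks in the image.

```python
def estimate_height(crown_y_px: int, scale_zero_y_px: int,
                    px_per_cm: float) -> float:
    """Given the pixel row of the user's crown and the pixel row of the
    scale's zero mark, return the height reading in cm. Image rows grow
    downward, so the crown lies above (smaller row than) the zero mark."""
    return (scale_zero_y_px - crown_y_px) / px_per_cm


# Crown detected 900 px above the scale's zero mark, at 5 px per cm:
height_cm = estimate_height(crown_y_px=120, scale_zero_y_px=1020, px_per_cm=5.0)
# → 180.0
```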
Of course, different references may be deployed for different signs, and references are not necessary, and some signs may not require a reference, e.g., gender, etc.
In another implementation, the image acquisition device may be deployed at a location from which images containing at least two users can be captured. For example, in a karaoke booth, it may be deployed in front of the seats.
In an initial stage in which at least two users are loaded on the respective behavior carriers 20, the image capturing device may capture an image containing at least two users; wherein, at least two users are loaded in the initial stage on the behavior carrier 20 corresponding to each user, and each behavior carrier 20 is in the initial working state.
In this implementation, the initial stage in which a user is carried on the behavior carrier 20 refers to the stage after the user is carried on the behavior carrier 20 but before the controller 10 adjusts it according to the sign difference; during this stage, the behavior carrier 20 can remain in its initial working state. To facilitate the controller 10's analysis of sign differences, the initial working states of the plurality of behavior carriers 20 may be kept consistent.
For example, in a karaoke booth, multiple seats may be at the same initial height when no one is seated. When different users sit down, the seats may maintain that initial height, awaiting a subsequent adjustment action by the controller 10. The image acquisition device may then capture images containing the seated users.
In this implementation, the reference object may also be deployed within the acquisition range of the image acquisition device, so that the image acquired by the image acquisition device includes at least two users and the reference object. The controller 10 can determine the sign information of each user based on the reference object, and further analyze the sign difference between different users. Of course, the controller 10 can also directly compare the signs between different users in the image to determine the sign difference between different users.
In this implementation, since each user is already loaded on the behavior carrier 20, the determined sign difference more accurately reflects the difference of behavior states that will be generated when different users participate in the behavior scene. This may provide a more accurate basis for the controller 10 to determine the adjustment parameter values.
Of course, in this embodiment, the detection component may further include other types of devices, for example, the detection component may include an infrared sensing device, the infrared sensing device may collect physical sign information of the user, such as height or weight, and different types of devices may be used to collect different physical sign information. The detection unit in this embodiment is not limited thereto.
In the above or following embodiments, before extracting the respective sign information of the at least two users from their images, the controller 10 may further perform identity recognition on the at least two users to determine whether any of them is a registered user. For a registered user, the sign information is extracted from the user information corresponding to that registered user; for a non-registered user, the operation of extracting sign information from the images of the at least two users is performed.
In this embodiment, a pre-registration scheme may be provided. The pre-registration scheme is particularly suitable for a behavior scene with a relatively fixed user group. Such as conference bars, conference rooms, etc. within an enterprise, the user groups are employees within the enterprise. Each user can register before participating in the behavior scene, and submit own user information such as identity information, physical sign information and the like. The identity information may be a face image, fingerprint data, and the like.
Accordingly, in the present embodiment, the controller 10 may identify at least two users to determine a registered user of the at least two users. The control system may be disposed with an identification device, such as a face recognition device, a fingerprint collection device, and the like, and the controller 10 may identify the user by using the identification device, so as to determine the registered user of the at least two users.
For a registered user, the sign information can be extracted from that user's stored user information, so the controller 10 need not perform the image-based extraction mentioned in the foregoing embodiment for that user.
For unregistered users, the controller 10 may perform the image-based extraction mentioned in the foregoing embodiment to obtain their sign information.
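This registered/unregistered branching can be sketched as follows. The sketch is a hypothetical illustration, assuming a simple dictionary registry and a pluggable image-extraction function; none of these names appear in the original text.

```python
def gather_sign_info(users, registry, extract_signs_from_image):
    """Collect sign (physical-characteristic) information per user:
    registered users are served from the registry, the rest from images."""
    signs = {}
    for user in users:
        record = registry.get(user["id"])
        if record is not None:
            # Registered user: reuse the signs submitted at registration.
            signs[user["id"]] = record["signs"]
        else:
            # Unregistered user: fall back to image analysis.
            signs[user["id"]] = extract_signs_from_image(user["image"])
    return signs
```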
Of course, the identification process mentioned in this embodiment is optional. For a behavior scene whose user group is not fixed, the user group may be too large to register and each user may participate only rarely; in view of these factors, the identification process provided in this embodiment may be omitted.
In the above or below embodiments, the controller 10 may further determine the behavior carrier 20 used by each of the at least two users before determining the corresponding adjustment parameter value of the behavior carrier 20 used by each of the at least two users.
An exemplary implementation provided by this embodiment is as follows: in the preliminary stage of the behavior scene, determine the relative positions between the at least two users and each behavior carrier 20; then determine the behavior carrier 20 selected by each user according to those relative positions. Here, each of the at least two users moves into position at their selected behavior carrier 20 during the preliminary stage.
In this implementation, in the preliminary stage of the behavior scene, each user moves into position at their selected behavior carrier 20. Being in position means entering the use space of the behavior carrier 20 or being borne on it.
To determine the relative positions between the at least two users and the behavior carriers 20, an image capture device may be deployed in the control system. The image capture device from the foregoing embodiment may be reused, or a dedicated one may be added, to capture an image reflecting those relative positions during the preliminary stage of the behavior scene.
The controller 10 may perform image analysis on the images acquired during the preliminary stage to determine the relative positions between the at least two users and the behavior carriers 20.
In practical applications, if the acquired image does not include the behavior carriers 20, a reference object may be deployed so that the positions of the at least two users can be determined from their positions relative to the reference object. The relative positions between the users and the behavior carriers 20 are then determined from the users' positions and the carriers' deployment positions. Note that the users' positions and the carriers' deployment positions must use the same positioning reference, for example the same coordinate system or the same reference object.
If the acquired image includes at least two users and various behavior carriers 20, the controller 10 can directly analyze the relative positions of the at least two users and the various behavior carriers 20 from the image.
Having determined the relative positions between the at least two users and the behavior carriers 20, the controller 10 can obtain the association between users and carriers, and thus determine the behavior carrier 20 each user will use in the behavior scene. The carrier a user selects in the preliminary stage is taken to be the carrier that user uses in the formal stage of the behavior scene.
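One minimal way to realise this association is nearest-neighbour matching: each user is linked to the carrier closest to them in the preliminary stage, with all coordinates in one shared reference frame as the text requires. The coordinate values and the Euclidean criterion below are illustrative assumptions, not part of the original text.

```python
def assign_carriers(user_positions, carrier_positions):
    """Map each user id to the id of the nearest behavior carrier."""
    def dist2(a, b):
        # Squared Euclidean distance; monotonic, so no sqrt needed.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return {uid: min(carrier_positions,
                     key=lambda cid: dist2(upos, carrier_positions[cid]))
            for uid, upos in user_positions.items()}
```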
If a user switches to a different behavior carrier 20 during the formal stage of the behavior scene, the controller 10 may apply the adjustment parameter value determined from the sign differences between that user and the other users to the new carrier, so as to adjust its working state. The original carrier can be controlled to return to its initial working state to await the next user.
Of course, the above implementation is only exemplary, and other implementations may be adopted to determine the behavior carrier 20 used by each of the at least two users. For example, a behavior carrier 20 may be assigned to each user in the preliminary stage of the behavior scene, with the user guided into position at the carrier assigned to them; the carrier used by each user can then be recorded during the assignment process. This embodiment is by no means limited to these exemplary implementations.
In the above or below embodiments, the types of adjustment parameters that different types of behavior carriers 20 can adjust may be the same or different. Adjustment parameter types include, but are not limited to, height, firmness, inclination, and the like. For example, the adjustment parameter type of a chair may include height, that of a sofa may include firmness, and that of a lift may likewise include height.
Therefore, to save processing resources, the controller 10 may focus on sign differences of different dimensions for different types of behavior carriers 20 when determining their adjustment parameter values. For example, for a chair the height difference between users may be what matters, while for a sofa the weight difference may be what matters. Of course, this embodiment is not limited thereto.
In this embodiment, given the adjustment parameter types of the different behavior carriers 20, one exemplary implementation is as follows: the controller 10 obtains the sign information of each of the at least two users and determines their corresponding ergonomic requirements from it; determines, for the behavior carrier 20 used by each user, the value range of the adjustment parameter that satisfies that user's ergonomic requirements; and, with the goal of mutually adapting the users' behavior states, selects a target adjustment parameter value from each carrier's value range as that carrier's adjustment parameter value.
In this implementation, each behavior carrier 20 is given a value range of its adjustment parameter that satisfies the ergonomic requirements of its user. The controller 10 selects a suitable value within each carrier's range, with the goal of mutually adapting the behavior states of the at least two users.
On the premise that the working states of the behavior carriers 20 remain ergonomic, this implementation reduces as far as possible the differences in behavior state between the at least two users caused by their sign differences. As explained earlier for the mutual adaptation of behavior states, this implementation does not pursue strict consistency of behavior states across users; instead it takes ergonomic requirements into account, preserving each user's comfort in using their behavior carrier 20, which can offset some of the aforementioned negative feelings.
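As a concrete sketch of selecting values from ergonomic ranges with mutual adaptation as the goal, suppose the carriers are seats with per-seat ergonomic height ranges and each user has a known sitting eye-height offset (both are assumptions made for illustration): aim every user's eye level at a common target and clamp each seat height into its range.

```python
def choose_seat_heights(eye_offsets, ranges):
    """Pick one seat height per user so that eye levels (seat height +
    sitting eye offset) come as close to a common level as the per-seat
    ergonomic ranges [lo, hi] allow."""
    # Midpoint of each user's feasible eye-level interval, then a shared target.
    levels = [((lo + off) + (hi + off)) / 2
              for (lo, hi), off in zip(ranges, eye_offsets)]
    target = sum(levels) / len(levels)
    # Clamp each seat height into its ergonomic range.
    return [min(max(target - off, lo), hi)
            for (lo, hi), off in zip(ranges, eye_offsets)]
```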
Of course, in this embodiment, other implementation manners may also be adopted to determine the adjustment parameter value corresponding to each behavior carrier 20. Moreover, the processing logic adopted by the controller 10 in determining the adjustment parameter value corresponding to each behavior carrier 20 can be flexibly adjusted according to factors such as different behavior carrier 20 types, concerned physical sign types, behavior characteristics in a behavior scene, and the like. This embodiment is not limited to this.
In the above or below embodiments, each behavior carrier 20 may be associated with its own drive assembly. The controller 10 may generate a control command for each behavior carrier 20 according to the adjustment parameter value of that carrier, and send each command to the drive assembly associated with that carrier, thereby controlling the drive assembly to adjust the carrier's working state according to its adjustment parameter value.
The components included in the drive assembly may differ for different types of behavior carriers 20 and different adjustment objectives.
For example, for the objective of height adjustment, the drive assembly may include a PLC (programmable logic controller) and an air cylinder; the cylinder may be fixedly connected to the behavior carrier 20, and the PLC may be communicatively connected to the controller 10. The PLC responds to a control command from the controller 10 by driving the cylinder according to the adjustment parameter value in the command, thereby raising or lowering the behavior carrier 20.
Of course, this is merely exemplary, and in this embodiment, the implementation manner of the driving assembly is not limited. The driving assembly can be deployed according to actual requirements.
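The flow from controller to drive assemblies might look like the following sketch. The dictionary command format and its field names are invented for illustration; the embodiment does not prescribe any particular message format.

```python
def build_commands(adjustments):
    """Turn per-carrier adjustment parameter values into one control
    command per carrier, addressed to that carrier's drive assembly.

    adjustments: {carrier_id: {"param": <parameter type>, "value": <number>}}
    """
    return [{"target": carrier_id,
             "parameter": adj["param"],
             "value": adj["value"]}
            for carrier_id, adj in adjustments.items()]
```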
Fig. 2a to fig. 2c are schematic diagrams of an application scenario provided in an exemplary embodiment of the present application. In fig. 2a-2c, two users with different heights enter the same scene space for chatting.
As shown in fig. 2a, an image capturing device is disposed at an entrance of the scene space, and images of two users can be captured and provided to the controller when the two users enter the scene space. The controller may determine a height difference between the two users based on the images of the two users.
As shown in fig. 2b, in the preliminary stage of the chat, the two users sit down on their selected chairs: the taller user selects the chair on the left, and the shorter user selects the chair on the right. In the preliminary stage, the two chairs are at the same initial height.
As shown in fig. 2c, based on the height difference between the two users, the controller may determine the adjustment parameter values of the two chairs with the goal of mutually adapting the two users' chat viewing angles. For example, if the two users differ in height by 30 cm, the controller may determine that the left chair should be lowered by 15 cm and the right chair raised by 15 cm. The controller issues control commands, carrying the adjustment parameter values, to the drive assemblies associated with the two chairs, and the drive assemblies adjust the chair heights accordingly.
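The symmetric adjustment in this example is simply splitting the height difference between the two chairs; a minimal sketch, assuming both chairs start at the same initial height and each can travel half the difference:

```python
def split_difference(taller_cm, shorter_cm):
    """Return (offset for the taller user's chair, offset for the
    shorter user's chair) that halves the standing-height difference."""
    half = (taller_cm - shorter_cm) / 2
    return -half, half  # lower one chair, raise the other
```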
Before the drive assemblies adjust the chair heights, an adjustment prompt can be sent to the users, for example prompt text on a display screen or a voice prompt through an audio device in the scene space, so that the users can prepare for the passive movement.
As shown in fig. 2c, the chair on the left is lowered by 15 cm while the chair on the right is raised by 15 cm, so that the two users' lines of sight during communication tend toward the same level.
In addition, the chair-height adjustment scheme of figs. 2a-2c applies not only to the chat behavior scene but also to video conferencing: by adjusting chair heights, the users' head heights tend to coincide within the capture range of the video-conference capture device, weakening the height difference between the two users in the video-conference picture.
Fig. 1b is a schematic structural diagram of an intelligent facility according to an exemplary embodiment of the present application. As shown in fig. 1b, the smart facility may include a facility body 3, a processor 1, and a drive assembly 2;
the processor 1 is used for determining a sign difference between users in a behavior scene that includes the user borne by the smart facility, and determining an adjustment parameter value according to the sign difference;
and adjusting the working state of the smart facility by means of the drive assembly 2 according to the adjustment parameter value, so that the behavior state of the user borne by the smart facility is adapted to the behavior states of the other users in the behavior scene.
The smart facility in this embodiment corresponds to a behavior carrier in the control system shown in fig. 1a. This embodiment differs from the control system shown in fig. 1a in that the functionality of the controller in the control system is integrated into the smart facility. Accordingly, in this embodiment, the smart facility can autonomously determine the sign difference and adjust its working state.
In this embodiment, the processor 1 may communicate with the detection component 4 corresponding to the behavior scene, and obtain sign information of each user in the behavior scene by using the detection component 4; and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
To achieve relative adjustment of working states between different smart facilities in the same behavior scene, in this embodiment the processors in different smart facilities may adopt the same processing rule. A single smart facility can comprehensively consider the sign information of every user in the behavior scene, derive the relative adjustment scheme for all behavior carriers in the scene, and then determine its own adjustment parameter value from that scheme; the value so determined is thereby adapted to the adjustment parameter values of the other smart facilities in the scene.
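Because each processor applies the same deterministic rule to the same sign information, every smart facility can compute the whole scheme and keep only its own entry, with no central coordinator, and the values remain mutually adapted by construction. The rule below (offset each seat by half the deviation from the mean height) is a made-up example of such a shared rule, not the one prescribed by the embodiment.

```python
def my_adjustment(all_heights_cm, my_user_id):
    """Evaluate the shared global rule over all users' sign information,
    then return only this facility's own entry of the resulting scheme."""
    mean = sum(all_heights_cm.values()) / len(all_heights_cm)
    scheme = {uid: (mean - h) / 2 for uid, h in all_heights_cm.items()}
    return scheme[my_user_id]
```

Run independently on each facility, the computed scheme is identical, so the per-facility adjustments stay consistent.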
For other technical details of this embodiment, refer to the related descriptions in the embodiments of the carrier control system; they are not repeated here for brevity, which should not be construed as limiting the scope of the present application.
Fig. 3a is a schematic structural diagram of another control system according to another exemplary embodiment of the present application. As shown in fig. 3a, the control system includes a controller 30 and a target facility 40.
The control system provided by this embodiment can be applied to independent small-space scenes, such as vocal bars, leisure bars, conference bars, and dining bars. Of course, it can also be applied to spaces of other sizes, such as conference rooms, KTV boxes, and restaurants. This embodiment does not limit the application scenario.
The control system provided by this embodiment mainly addresses the situation in which at least two users share the same target facility 40, and provides a solution for adapting the working state of the target facility 40 to the signs of the at least two users.
The controller 30 can determine the sign differences between at least two users using the same target facility 40. A user's signs include, but are not limited to, height, weight, and sex. A sign difference is the difference between different users in the same sign.
In this embodiment, the controller 30 may determine an adjustment parameter value corresponding to the target facility 40 according to the sign difference between at least two users; and adjusting the working state of the target facility 40 according to the adjustment parameter value so as to adapt to the physical signs of at least two users.
The target facility 40 may be one or more of a camera, a display screen, a table, or a microphone, among others. These are merely exemplary, the present embodiment is not limited to the type of the target facility 40, and the target facility 40 may be any facility shared by a plurality of users.
In practice, sign differences between users may cause them to feel unequal or disrespected when using the same target facility 40; this feeling is more intense when the environmental space is small and the behavior distance between users is short.
In this embodiment, the controller 30 can adjust the working state of the target facility 40 according to the sign differences between users, so that the working state of the target facility 40 is adapted to the signs of the different users, thereby alleviating the feelings of inequality or disrespect caused by those sign differences.
For example, when two users sing with the same microphone stand, their height difference makes using the microphone feel different: the shorter user may struggle to reach the microphone, while the taller user may be uncomfortable having to bend down to it. The controller 30 can adjust the microphone to a height better suited to both, alleviating the awkwardness or discomfort described above.
Adapting the working state of the target facility 40 to the signs of at least two users may be understood as adjusting the target facility 40 to a working state that balances the negative feelings experienced by the different users sharing it. In practical applications, these negative feelings may not completely disappear, owing to limitations such as the adjustment limits of the target facility 40 and ergonomic requirements; the objective of this embodiment is not to force them to disappear completely, but to balance them as far as possible while satisfying those limitations.
In this embodiment, a difference in signs between at least two users using the same target facility 40 can be determined; determining an adjustment parameter value corresponding to the target facility 40 according to the sign difference between at least two users; and adjusting the working state of the target facility 40 according to the adjustment parameter value so as to adapt to the physical signs of at least two users. Accordingly, the negative feelings experienced by different users using the same target facility 40 can be effectively balanced.
In the above or below embodiments, the controller 30 may capture images of at least two users using the image capture device; extracting respective sign information of at least two users from the images of the at least two users; and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
In this embodiment, the control system further includes an image capturing device. The image acquisition device acquires images of at least two users as needed.
In one implementation, the image capture device may be deployed at the entrance of the environmental space in which the target facility 40 is located. Since the entrance is a path the at least two users must pass through, an image capture device deployed there can reliably capture images of the at least two users.
In this implementation, the image capture device captures an image containing a single user at a time; of course, if multiple users pass through the entrance simultaneously, it may capture an image containing multiple users at once. This is not limited here.
In practical applications, a reference object may be placed within the capture range of the image capture device so that the captured images of the at least two users include the reference object. The controller 30 may then perform image analysis on the captured images and determine each user's sign information against the reference object.
For example, a graduated scale may be provided at the entrance. When a user passes through the entrance, the image capture device captures an image containing both the user and the scale, and the controller 30 can analyze which graduation the user's crown is level with, thereby determining the user's height.
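Reading a height off the reference scale amounts to interpolating the crown's pixel row between two calibrated graduations; a toy sketch with invented calibration values:

```python
def height_from_scale(crown_row, mark_rows, mark_values_cm):
    """Linearly interpolate a pixel row into centimetres using two
    scale marks with known pixel rows and known real-world heights."""
    (r0, r1), (v0, v1) = mark_rows, mark_values_cm
    return v0 + (crown_row - r0) * (v1 - v0) / (r1 - r0)
```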
Of course, different reference objects may be deployed for different signs, and a reference object is not always necessary; some signs, such as sex, require none.
In another implementation, the image capture device may be deployed at a location from which it can capture images containing the at least two users, for example in front of the seats in a vocal bar.
The image capture device may capture an image containing the at least two users once they are in position at their respective facility use locations, where a facility use location is the position a user occupies in the environmental space when using the target facility 40.
For example, the at least two users may each be seated, with the multiple seats at the same initial height, and the image capture device captures images containing the seated users.
In this implementation, the reference object may also be deployed within the capture range of the image capture device, so that the captured image includes the at least two users and the reference object. The controller 30 may determine each user's sign information against the reference object and then analyze the sign differences between users. Of course, the controller 30 can also directly compare the signs of the different users in the image to determine the sign differences between them.
In this implementation, since each user is already in place at the facility use location, the determined sign difference will more accurately reflect the difference in the working status of different users using the same target facility 40. This may provide a more accurate basis for controller 30 to determine the adjustment parameter values.
Of course, in this embodiment, other implementation manners may also be adopted to determine the sign difference between different users, for example, the infrared sensing device may be adopted to collect sign information of height or weight of the user, and different sign information may be collected by using different devices. The way of determining the sign difference between different users in the present embodiment is by no means limited to this.
In the above or below embodiments, before extracting the sign information of the at least two users from their images, the controller 30 may further identify the at least two users to determine whether any of them is a registered user. For a registered user, the sign information is extracted from the user information corresponding to that user; for unregistered users, the operation of extracting sign information from the images of the at least two users is performed.
In this embodiment, a pre-registration scheme may be provided. The pre-registration scheme is particularly suitable for behavior scenes with a relatively fixed user group, such as conference bars and conference rooms within an enterprise, where the user group is the enterprise's employees. Each user can register before using the target facility 40 and submit user information such as identity information and sign information. The identity information may be a face image, fingerprint data, and the like.
Accordingly, in this embodiment, the controller 30 may identify the at least two users to determine which of them are registered. The control system may be equipped with an identification device, such as a face recognition device or a fingerprint collection device, which the controller 30 can use to identify the users and thus determine the registered users among the at least two.
For a registered user, the sign information can be extracted from that user's stored user information, so the controller 30 need not perform the image-based extraction mentioned in the foregoing embodiment for that user.
For unregistered users, the controller 30 may perform the image-based extraction mentioned in the foregoing embodiment to obtain their sign information.
Of course, the identification process mentioned in this embodiment is optional. Where the user group is not fixed, the user group may be too large to register and a given user may use the target facility 40 only rarely; in view of these factors, the identification process provided in this embodiment may be omitted.
In the above or below embodiments, the types of tuning parameters that may be adjusted by different types of target facilities 40 may be the same or different. Types of tuning parameters include, but are not limited to, height, stiffness, inclination, and the like. For example, the type of adjustment parameter corresponding to the microphone may include height, the type of adjustment parameter corresponding to the sofa may include hardness, and the type of adjustment parameter corresponding to the table may also include inclination.
Therefore, to save processing resources, the controller 30 may focus on sign differences of different dimensions for different types of target facilities 40 when determining their adjustment parameter values. For example, for a microphone the height difference between users may be what matters, while for a sofa the weight difference may be what matters. Of course, this embodiment is not limited thereto.
In this embodiment, given the adjustment parameter types of the different target facilities 40, one exemplary implementation is as follows: the controller 30 obtains the sign information of each of the at least two users and determines their corresponding ergonomic requirements from it; determines, for each user, the value range of the adjustment parameter that satisfies that user's ergonomic requirements; selects, with the goal of adapting to the signs of the at least two users, a target adjustment parameter value from each user's value range; and determines the adjustment parameter value of the target facility 40 from the users' target adjustment parameter values.
In this implementation, for the target facility 40, a value range of the adjustment parameter satisfying the ergonomic requirements of each user is determined per user. The controller 30 selects a suitable value within each user's range, with the goal of adapting to the signs of the at least two users. For example, the median over all the value ranges may be determined, and within each range the value closest to that median selected as that user's target adjustment parameter value.
On this basis, the controller 30 determines the adjustment parameter value of the target facility 40 from the multiple target adjustment parameter values; for example, it may take their median or minimum, or compute their average. This is not limited here.
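The median-based example can be sketched as follows. Clamping an overall median into each user's range is one reading of "the value closest to the median within that range", and taking a final median over the per-user targets is one of the aggregation choices mentioned; the specific ranges are illustrative.

```python
import statistics

def facility_value(ranges):
    """ranges: one (lo, hi) ergonomic interval per user.
    Returns a single adjustment value for the shared target facility."""
    # Median over all range endpoints.
    mid = statistics.median(v for lo, hi in ranges for v in (lo, hi))
    # Closest value to that median within each user's range (a clamp).
    targets = [min(max(mid, lo), hi) for lo, hi in ranges]
    # Aggregate the per-user targets, here with another median.
    return statistics.median(targets)
```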
With this implementation, on the premise that the adjusted working state of the target facility 40 remains ergonomic, the differences in use experience caused by the sign differences between the at least two users can be reduced as far as possible; the different users' experiences of the same target facility 40 are balanced on an ergonomic basis.
Of course, in this embodiment, other implementations may also be used to determine the adjustment parameter value corresponding to the target facility 40. Moreover, the processing logic employed by the controller 30 in determining the adjustment parameter values corresponding to the target facilities 40 can be flexibly adjusted according to factors such as the types of different target facilities 40 and the types of the concerned physical signs. This embodiment is not limited to this.
In the above or below described embodiments, the target facility 40 may be associated with a drive assembly. The controller 30 may generate a control command according to the adjustment parameter value corresponding to the target facility 40; and sending a control command to the driving component associated with the target facility 40 to control the driving component to adjust the working state of the target facility 40 according to the corresponding adjustment parameter value of the target facility 40.
The components included in the drive assembly may differ for different types of target facilities 40 and different types of adjustment parameters.
For example, where the height of the target facility 40 needs adjusting, the drive assembly may include a PLC (programmable logic controller) and an air cylinder; the cylinder may be affixed to the target facility 40, and the PLC may be communicatively connected to the controller 30. The PLC responds to a control command from the controller 30 by driving the cylinder according to the adjustment parameter value in the command, thereby raising or lowering the target facility 40.
Of course, this is merely exemplary, and in this embodiment, the implementation manner of the driving assembly is not limited. The driving assembly can be deployed according to actual requirements.
Fig. 4a to 4c are schematic diagrams of another application scenario provided in another exemplary embodiment of the present application. In figs. 4a-4c, two users of different heights sing using the same microphone.
As shown in fig. 4a, an image capturing device is disposed at an entrance of the scene space, and images of two users can be captured and provided to the controller when the two users enter the scene space. The controller may determine a height difference between the two users based on the images of the two users.
As shown in fig. 4b, in the preparation stage of singing, the two users move to their respective singing positions. The microphone is at an initial height of 170 cm; for the shorter user, the microphone is awkward to use at this height.
As shown in fig. 4c, the controller may determine the adjustment parameter value of the microphone based on the height difference between the two users, with the aim of adapting to the heights of both users. For example, if the height difference between the two users is 30 cm, the shorter user being 150 cm and the taller user 180 cm, the controller may adjust the height of the microphone to 160 cm. The controller may issue a control command to a drive assembly associated with the microphone and carry the adjustment parameter value in the control command. The driving component then adjusts the height of the microphone according to the control command.
Before the driving component adjusts the height of the microphone, adjustment prompt information may be sent to the users, for example, prompt text or prompt voice may be output through a display screen or an audio device in the scene space, so that the users are mentally prepared for the adjustment.
As shown in fig. 4c, the height of the microphone is adjusted to 160 cm. The use experience of both users is improved.
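One way such a compromise value could be computed is by intersecting per-user comfortable ranges for the parameter. The sketch below assumes illustrative ranges of 140–165 cm and 155–185 cm for the two users; these ranges are not given in the application and are chosen only so that the result matches the 160 cm of the example:

```python
def pick_adjustment(ranges):
    """Pick one adjustment value from per-user (low, high) comfortable ranges.

    If the ranges overlap, low..high is the common comfortable band and its
    midpoint is returned; if they do not overlap, the same formula yields the
    point midway between the nearest endpoints, i.e. an even compromise.
    """
    low = max(lo for lo, _ in ranges)
    high = min(hi for _, hi in ranges)
    return (low + high) / 2
```

With the assumed ranges above, `pick_adjustment([(140, 165), (155, 185)])` yields 160.0.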
Fig. 3b is a schematic structural diagram of an intelligent facility according to another exemplary embodiment of the present application. As shown in fig. 3b, the smart facility may include a facility body 7, a processor 5, and a drive assembly 6;
the processor 5 is configured to determine a difference in signs between at least two users using the smart facility; determining an adjusting parameter value according to the sign difference;
and adjusting the working state of the intelligent facility by using the driving component 6 according to the adjustment parameter value so as to adapt to the physical signs of at least two users.
The intelligent facility in this embodiment corresponds to the target facility in the control system shown in fig. 3 a. The difference between this embodiment and the control system shown in fig. 3a is that the functionality of the controller in the control system is integrated into the smart facility. Accordingly, in this embodiment, the intelligent facility can autonomously determine the sign difference and adjust its own working state.
In this embodiment, the processor 5 may communicate with the detection component 8 corresponding to the space where the intelligent facility is located, and obtain respective sign information of at least two users using the intelligent facility by using the detection component 8; and determining the sign difference between the at least two users according to the respective sign information of the at least two users. And further determining an adjusting parameter value according to the sign difference.
In addition, other technical details related to the present embodiment may refer to the related descriptions in the embodiments of the control system, and for brevity, are not described herein again, but this should not cause a loss of the scope of the present application.
Fig. 5 is a schematic structural diagram of another control system according to another exemplary embodiment of the present application. As shown in fig. 5, the control system includes a controller 50 and at least one facility 60.
The control system provided by this embodiment mainly addresses the situation in which a user enters a target space and uses at least one facility 60 in the target space, and provides a solution for adapting the working state of the at least one facility 60 to the physical signs of the user. In this embodiment, the target space may be a restaurant, a conference room, a chat room, an entertainment room, or the like; the form of the target space is not limited in this embodiment. The target space may have different names in different forms; for example, when implemented as a separate small space, the target space may also be referred to as a singing booth, leisure booth, conference booth, dining booth, etc.
The facility 60 in the target space may be a chair, a sofa, a camera, a display screen, a desk, a microphone, etc., which are only exemplary, the present embodiment is not limited to the type of the facility 60, and the facility 60 may be any facility that may be used by a user in the target space.
The controller 50 may acquire the physical sign information of a user entering the target space. The sign information includes, but is not limited to, height, weight, gender, and the like.
Determining an adjustment parameter value of at least one facility 60 in the target space according to the physical sign information of the user; and respectively adjusting the working state of at least one facility 60 according to the adjustment parameter value of at least one facility 60 so as to adapt to the physical sign information of the user.
In this embodiment, the controller 50 can adjust the at least one facility 60 to the working state adapted to the physical sign of the user by using the physical sign information of the user as an adjustment basis for adjusting the at least one facility 60.
Wherein the types of tuning parameters corresponding to different facilities 60 may not be identical. Types of tuning parameters include, but are not limited to, height, stiffness, inclination, and the like.
In this embodiment, the sign information of the user entering the target space can be acquired; the adjustment parameter value of at least one facility 60 in the target space is determined according to the physical sign information of the user; and the working state of the at least one facility 60 is respectively adjusted according to its adjustment parameter value so as to adapt to the physical signs of the user. Accordingly, the user's experience of using the facilities 60 in the target space can be effectively improved.
In the above or below embodiments, the controller 50 may capture an image of the user using an image capture device; and extracting the sign information of the user from the image of the user.
In this embodiment, the control system further includes an image capturing device. The image acquisition device acquires an image of the user as needed.
In one implementation, the image capture device may be deployed at the entrance to the target space. Every user must pass through the entrance, so an image acquisition device arranged there can reliably acquire the image of the user.
In practical applications, a reference object may be set in the acquisition range of the image acquisition device, the image of the user acquired by the image acquisition device includes the reference object, and the controller 50 may perform image analysis on the image acquired by the image acquisition device and determine the physical sign information of the user according to the reference object.
For example, a scale may be provided at the entrance, an image including the user and the scale may be captured by the image capturing device when the user passes through the entrance, and the controller 50 may analyze which graduation mark of the scale the top of the user's head is level with, thereby determining the height information of the user.
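The scale-based height reading reduces to a linear interpolation between graduation marks detected in the image. The function below is a sketch under that assumption; detecting the crown pixel and the calibration marks is left to the image-analysis stage, and all values shown are hypothetical:

```python
def height_from_scale(crown_y, mark_a, mark_b):
    """Interpolate the user's height from two known graduation marks.

    crown_y is the pixel row of the top of the user's head; mark_a and
    mark_b are (pixel_y, height_cm) pairs for two graduations of the scale
    visible in the same image (hypothetical calibration data).
    """
    (ya, ha), (yb, hb) = mark_a, mark_b
    return ha + (crown_y - ya) * (hb - ha) / (yb - ya)
```

For instance, with marks at pixel rows 800 and 200 corresponding to 100 cm and 200 cm, a crown at row 350 maps to 175.0 cm.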
Of course, different reference objects may be deployed for different physical signs, and a reference object is not always necessary; some signs, such as gender, may not require one.
Of course, in this embodiment, other implementation manners may also be adopted to determine the sign information of the user; for example, an infrared sensing device may be used to collect sign information such as height or weight, and different sign information may be collected with different devices. The manner of determining the sign information of the user in this embodiment is by no means limited to this.
In the above or below embodiments, before extracting the sign information of the user from the image of the user, the controller 50 may also perform identification on the user to determine whether the user is a registered user; if the user is a registered user, extracting sign information from user information corresponding to the registered user; and if the user is a non-registered user, executing the operation of extracting the physical sign information of the user from the image of the user.
In this embodiment, a pre-registration scheme may be provided. The pre-registration scheme is particularly suitable for a behavior scene with a relatively fixed user group, such as conference booths or conference rooms within an enterprise, where the user group consists of the employees of the enterprise. A user may register before using the facilities 60 and submit user information such as identity information and sign information. The identity information may be a face image, fingerprint data, and the like.
Accordingly, in this embodiment, the controller 50 may identify the user to determine whether the user is a registered user. The control system may be disposed with an identification device, such as a face recognition device, a fingerprint collection device, and the like, and the controller 50 may identify the user by using the identification device, so as to determine whether the user is a registered user.
For the registered user, the sign information can be extracted from the user information of the registered user. In this way, the controller 50 may not need to perform the operation of extracting the physical sign information of the user from the image of the user as mentioned in the foregoing embodiment.
For the non-registered user, the controller 50 may perform the operation of extracting the physical sign information of the user from the image of the user as mentioned in the foregoing embodiment to obtain the physical sign information of the user.
Of course, the identification process mentioned in this embodiment is not mandatory. When the user group is not fixed, on the one hand the user group may be too large to register, and on the other hand an individual user may enter the target space only rarely; in view of these factors, the identification process provided in this embodiment may be omitted.
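The registered/unregistered branching can be sketched as follows; the registry contents, the function names, and the image-extraction stand-in are all assumptions for illustration:

```python
REGISTRY = {"alice": {"height": 165, "weight": 55}}  # pre-registered profiles (example data)


def extract_from_image(image):
    # Stand-in for the image-analysis path of the foregoing embodiments;
    # here it simply echoes sign values attached to the test image.
    return image["signs"]


def get_sign_info(user_id, image):
    # Registered user: read signs from the stored profile, skip image analysis.
    if user_id in REGISTRY:
        return REGISTRY[user_id]
    # Unregistered user: fall back to extracting signs from the image.
    return extract_from_image(image)
```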
In the above or below embodiments, the types of tuning parameters that may be adjusted for different types of facilities 60 may be the same or different. Types of tuning parameters include, but are not limited to, height, stiffness, inclination, and the like. For example, the type of adjustment parameter corresponding to the microphone may include height, the type of adjustment parameter corresponding to the sofa may include hardness, and the type of adjustment parameter corresponding to the table may also include inclination.
Thus, to conserve processing resources, the controller 50 may focus on physical sign information of different dimensions for different types of facilities 60 when determining the adjustment parameter value corresponding to each facility 60. For example, for a microphone, the user's height information may be of interest, while for a sofa, the user's weight information may be of interest. Of course, this embodiment is not limited thereto.
In this embodiment, based on the types of adjustment parameters that can be adjusted by different types of facilities 60, in an exemplary implementation, the controller 50 can respectively determine the ergonomic requirement of the user under at least one type of facility 60 according to the sign information of the user; according to the ergonomic requirement of each of the at least one facility 60, the adjustment parameter value corresponding to each of the at least one facility 60 is determined.
In this implementation, the ergonomic requirements required by the user are not exactly the same for different facilities 60. The controller 50 can adjust the at least one facility 60 to an operating state that meets ergonomic requirements of the user.
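The idea of attending to a different sign dimension per facility type can be expressed as a simple lookup; the mapping below is an illustrative assumption, not a table from the application:

```python
FOCUS = {  # hypothetical mapping: facility type -> sign dimension of interest
    "microphone": "height",
    "sofa": "weight",
    "desk": "height",
}


def adjustment_basis(facility_type, sign_info):
    # Return only the sign dimension this facility type cares about, so the
    # controller need not process every dimension for every facility.
    return sign_info[FOCUS[facility_type]]
```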
Of course, other implementations may also be used in this embodiment to determine the adjustment parameter value corresponding to each facility 60. Moreover, the processing logic employed by the controller 50 in determining the adjustment parameter value corresponding to each facility 60 can be flexibly adjusted based on factors such as the type of facility 60 and the type of physical sign of interest. This embodiment is not limited in this respect.
In the above or below embodiments, at least one facility 60 may each be associated with a drive assembly. The controller 50 may generate a control command corresponding to each of the at least one facility 60 according to the adjusted parameter value of the at least one facility 60; the control command corresponding to each of the at least one facility 60 is sent to the associated drive assembly of each of the at least one facility 60 to control the associated drive assembly of each of the at least one facility 60 to adjust the operating condition of the at least one facility 60 according to the adjusted parameter value of the at least one facility 60.
The components included in the drive assembly may not be identical for different types of facilities 60, and for different types of tuning parameters.
For example, in the event that the height of the facility 60 needs to be adjusted, the drive assembly may include a PLC (programmable logic controller) and a pneumatic cylinder, the pneumatic cylinder may be secured to the facility 60, and the PLC may be communicatively coupled to the controller 50. The PLC can respond to the control command sent by the controller 50 and drive the cylinder to move according to the adjustment parameter value in the control command, so as to drive the facility 60 to adjust its height.
Of course, this is merely exemplary, and in this embodiment, the implementation manner of the driving assembly is not limited. The driving assembly can be deployed according to actual requirements.
Fig. 6 is a flowchart illustrating a carrier control method according to another exemplary embodiment of the present application.
As shown in fig. 6, the method includes:
step 600, determining the sign difference between at least two users in the same behavior scene;
step 601, respectively determining adjustment parameter values corresponding to behavior carriers used by the at least two users according to the sign difference;
step 602, adjusting the working state of each behavior carrier according to the adjustment parameter value, so that the behavior states of at least two users are mutually adaptive;
wherein, the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
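Steps 600 to 602 can be sketched end to end as below. The data model and the adjustment rule (raising each user's carrier so that head heights line up) are hypothetical; the application deliberately leaves the concrete logic open:

```python
def control_carriers(users, carriers):
    heights = [u["height"] for u in users]
    # Step 600: sign difference between users in the same behavior scene.
    diff = max(heights) - min(heights)
    # Step 601: per-carrier adjustment values -- here each user's carrier is
    # raised by the amount needed to bring head heights level (assumed rule).
    values = {u["id"]: max(heights) - u["height"] for u in users}
    # Step 602: apply each value to the carrier's working state.
    for u in users:
        carriers[u["id"]]["height"] = values[u["id"]]
    return diff, carriers
```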
In an optional embodiment, the step of determining the sign difference between at least two users in the same behavior scenario comprises:
acquiring images of at least two users by using image acquisition equipment;
extracting respective sign information of at least two users from the images of the at least two users;
and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
In an alternative embodiment, the step of capturing images of at least two users with an image capture device comprises:
and respectively acquiring images of the at least two users at the entrance of the environment space corresponding to the behavior scene by using the image acquisition device.
In an alternative embodiment, the step of capturing images of at least two users with an image capture device comprises:
in the initial stage that at least two users are loaded on the corresponding behavior carriers, acquiring images containing at least two users by using image acquisition equipment;
in the initial stage that at least two users are loaded on the corresponding behavior carriers, the behavior carriers are in the initial working state.
In an optional embodiment, before the step of extracting the respective sign information of the at least two users from the images of the at least two users, the method further includes:
performing identity recognition on at least two users to determine whether a registered user exists in the at least two users;
aiming at a registered user, extracting sign information from user information corresponding to the registered user;
and aiming at the non-registered users, performing operation of extracting respective physical sign information of at least two users from the images of the at least two users.
In an optional embodiment, before determining, according to the sign difference, the adjustment parameter values corresponding to the behavior carriers used by the at least two users, respectively, the method further includes:
in a preparation stage of the behavior scene, determining relative positions between at least two users and each behavior carrier;
determining behavior carriers selected by the at least two users according to the relative positions of the at least two users and the behavior carriers;
wherein, in the preparation stage, at least two users are respectively in position to the selected behavior carrier.
In an optional embodiment, the step of determining, according to the difference of the physical signs, adjustment parameter values corresponding to behavior carriers used by at least two users respectively includes:
acquiring respective sign information of at least two users, and determining respective ergonomic requirements of the at least two users according to the sign information;
respectively determining the numerical value ranges of the adjustment parameters meeting the ergonomic requirements of the users aiming at the behavior carriers used by the at least two users respectively;
and respectively selecting a target adjustment parameter value from the numerical range of the adjustment parameters corresponding to the behavior carrier used by each of the at least two users, as the adjustment parameter value corresponding to that behavior carrier, with the aim of making the behavior states of the at least two users adapt to each other.
In an alternative embodiment, each behavioral carrier is associated with a driving component, and the step of adjusting the operating state of each behavioral carrier according to the adjustment parameter value includes:
respectively generating control commands corresponding to the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers used by at least two users;
and respectively sending the control commands corresponding to the behavior carriers to the driving components associated with the behavior carriers so as to control the driving components to adjust the working states of the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers.
In an alternative embodiment, the behavior carrier includes one or more of a chair, a table, a sofa, or a lift table.
In an alternative embodiment, the type of adjustment parameter includes one or more of height, inclination or stiffness.
In an alternative embodiment, the physical signs include one or more of height, weight, or gender.
In an alternative embodiment, the behavioral scenario includes one or more of a chat scenario, a dining scenario, a meeting scenario, or an entertainment scenario.
It should be noted that the technical details related to the embodiments of the control method described above can be referred to the related description related to the controller in the related embodiment of the control system shown in fig. 1a, and for the sake of brevity, the detailed description is not repeated here, but this should not cause the loss of the protection scope of the present application.
Fig. 7 is a schematic structural diagram of a computing device according to another exemplary embodiment of the present application. As shown in fig. 7, the computing device includes a memory 70 and a processor 71.
A processor 71, coupled to the memory 70, for executing computer programs in the memory 70 for:
determining a sign difference between at least two users in the same behavior scene;
respectively determining adjustment parameter values corresponding to behavior carriers used by at least two users according to the sign difference;
adjusting the working states of the behavior carriers according to the adjustment parameter values so as to enable the behavior states of at least two users to be mutually adaptive;
wherein, the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
In an alternative embodiment, the processor 71, when determining the sign difference between at least two users in the same behavior scenario, is configured to:
acquiring images of at least two users by using image acquisition equipment;
extracting respective sign information of at least two users from the images of the at least two users;
and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
In an alternative embodiment, the processor 71, when capturing images of at least two users with the image capturing device, is configured to:
and respectively acquiring images of the at least two users at the entrance of the environment space corresponding to the behavior scene by using the image acquisition device.
In an alternative embodiment, the processor 71, when capturing images of at least two users with the image capturing device, is configured to:
in the initial stage that at least two users are loaded on the corresponding behavior carriers, acquiring images containing at least two users by using image acquisition equipment;
in the initial stage that at least two users are loaded on the corresponding behavior carriers, the behavior carriers are in the initial working state.
In an optional embodiment, before extracting the respective sign information of the at least two users from the images of the at least two users, the processor 71 is further configured to:
performing identity recognition on at least two users to determine whether a registered user exists in the at least two users;
aiming at a registered user, extracting sign information from user information corresponding to the registered user;
and aiming at the non-registered users, performing operation of extracting respective physical sign information of at least two users from the images of the at least two users.
In an optional embodiment, before determining, according to the sign difference, the adjustment parameter values corresponding to the behavior carriers used by the at least two users respectively, the processor 71 is further configured to:
in a preparation stage of the behavior scene, determining relative positions between at least two users and each behavior carrier;
determining behavior carriers selected by the at least two users according to the relative positions of the at least two users and the behavior carriers;
wherein, in the preparation stage, at least two users are respectively in position to the selected behavior carrier.
In an optional embodiment, when determining, according to the sign difference, the adjustment parameter values corresponding to the behavior carriers used by the at least two users, respectively, the processor 71 is configured to:
acquiring respective sign information of at least two users, and determining respective ergonomic requirements of the at least two users according to the sign information;
respectively determining the numerical value ranges of the adjustment parameters meeting the ergonomic requirements of the users aiming at the behavior carriers used by the at least two users respectively;
and respectively selecting a target adjustment parameter value from the numerical range of the adjustment parameters corresponding to the behavior carrier used by each of the at least two users, as the adjustment parameter value corresponding to that behavior carrier, with the aim of making the behavior states of the at least two users adapt to each other.
In an alternative embodiment, each behavior carrier is associated with a driving component, and the processor 71, when adjusting the operating state of each behavior carrier according to the adjustment parameter value, is configured to:
respectively generating control commands corresponding to the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers used by at least two users;
and respectively sending the control commands corresponding to the behavior carriers to the driving components associated with the behavior carriers so as to control the driving components to adjust the working states of the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers.
In an alternative embodiment, the behavior carrier includes one or more of a chair, a table, a sofa, or a lift table.
In an alternative embodiment, the type of adjustment parameter includes one or more of height, inclination or stiffness.
In an alternative embodiment, the physical signs include one or more of height, weight, or gender.
In an alternative embodiment, the behavioral scenario includes one or more of a chat scenario, a dining scenario, a meeting scenario, or an entertainment scenario.
It should be noted that the technical details related to the embodiments of the computing device described above can be referred to the related description related to the controller in the related embodiment of the control system shown in fig. 1a, and for the sake of brevity, the detailed description is not repeated here, but this should not cause the loss of the protection scope of the present application.
Further, as shown in fig. 7, the computing device further includes: communication components 72, power components 73, and the like. Only some of the components are schematically shown in fig. 7, and the computing device is not meant to include only the components shown in fig. 7.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps that can be executed by a computing device in the foregoing method embodiments when executed.
Fig. 8 is a schematic flow chart of another facility control method according to another exemplary embodiment of the present application.
As shown in fig. 8, the method includes:
step 800, determining the sign difference between at least two users using the same target facility;
step 801, determining an adjustment parameter value corresponding to a target facility according to the sign difference between at least two users;
and step 802, adjusting the working state of the target facility according to the adjustment parameter value so as to adapt to the physical signs of at least two users.
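For the shared-facility case, steps 800 to 802 can be sketched as follows; the "meet midway" rule is an assumption standing in for whatever adjustment logic a concrete embodiment uses, and `set_height` abstracts the drive assembly:

```python
def control_target_facility(users, set_height):
    heights = sorted(u["height"] for u in users)
    # Step 800: sign difference between users of the same target facility.
    diff = heights[-1] - heights[0]
    # Step 801: adjustment value from the difference (assumed rule: set the
    # facility midway between the shortest and the tallest user).
    value = heights[0] + diff / 2
    # Step 802: adjust the facility's working state via its drive assembly.
    set_height(value)
    return value
```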
In an alternative embodiment, the step of determining a difference in signs between at least two users using the same target facility comprises:
acquiring images of at least two users by using image acquisition equipment;
extracting respective sign information of at least two users from the images of the at least two users;
and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
In an alternative embodiment, the step of capturing images of at least two users with an image capture device comprises:
and respectively acquiring images of the at least two users at the entrance of the environment space where the target facility is located by using the image acquisition device.
In an alternative embodiment, the step of capturing images of at least two users with an image capture device comprises:
with at least two users in place to respective facility use locations, an image containing the at least two users is captured with an image capture device.
In an optional embodiment, before the step of extracting the respective sign information of the at least two users from the images of the at least two users, the method further includes:
performing identity recognition on at least two users to determine whether a registered user exists in the at least two users;
aiming at a registered user, extracting sign information from user information corresponding to the registered user;
and aiming at the non-registered users, performing operation of extracting respective physical sign information of at least two users from the images of the at least two users.
In an optional embodiment, the step of determining the adjustment parameter value corresponding to the target facility according to the sign difference between at least two users includes:
acquiring respective sign information of at least two users, and determining respective ergonomic requirements of the at least two users according to the sign information;
respectively determining the numerical ranges of the adjustment parameters meeting the ergonomic requirements of at least two users;
respectively selecting target adjustment parameter values from the numerical ranges of the adjustment parameters corresponding to the at least two users, with the aim of adapting to the physical signs of the at least two users;
and determining an adjusting parameter value corresponding to the target facility according to the target adjusting parameter value corresponding to each of the at least two users.
In an alternative embodiment, the target facility is associated with a drive assembly, and the step of adjusting the operating state of the target facility according to the adjustment parameter value comprises:
generating a control command according to the corresponding adjustment parameter value of the target facility;
and sending the control command to a driving component associated with the target facility so as to control the driving component to adjust the working state of the target facility according to the adjustment parameter value corresponding to the target facility.
In an alternative embodiment, the target facility includes one or more of a camera, a display screen, a table, or a microphone.
In an alternative embodiment, the type of adjustment parameter includes one or more of height, inclination or stiffness.
In an alternative embodiment, the physical signs include one or more of height, weight, or gender.
It should be noted that the technical details related to the embodiments of the control method described above can be referred to the related description related to the controller in the related embodiment of the control system shown in fig. 3a, and for the sake of brevity, the detailed description is not repeated here, but this should not cause the loss of the protection scope of the present application.
Fig. 9 is a schematic structural diagram of another computing device according to yet another exemplary embodiment of the present application. As shown in fig. 9, the computing device includes: a memory 90 and a processor 91.
A processor 91, coupled to the memory 90, for executing the computer program in the memory 90 for:
determining a sign difference between at least two users using the same target facility;
determining an adjustment parameter value corresponding to a target facility according to the sign difference between at least two users;
and adjusting the working state of the target facility according to the adjustment parameter value so as to adapt to the physical signs of at least two users.
In an alternative embodiment, the processor 91, when determining a sign difference between at least two users using the same target facility, is configured to:
acquiring images of at least two users by using image acquisition equipment;
extracting respective sign information of at least two users from the images of the at least two users;
and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
In an alternative embodiment, the processor 91, when capturing images of at least two users with the image capturing device, is configured to:
and respectively acquiring images of the at least two users at the entrance of the environment space in which the target facility is located, by using the image acquisition device.
In an alternative embodiment, the processor 91, when capturing images of at least two users with the image capturing device, is configured to:
after the at least two users have moved into their respective facility use positions, acquiring an image containing the at least two users with the image capture device.
In an optional embodiment, the processor 91, before extracting the respective sign information of the at least two users from the images of the at least two users, is further configured to:
performing identity recognition on at least two users to determine whether a registered user exists in the at least two users;
aiming at a registered user, extracting sign information from user information corresponding to the registered user;
and for non-registered users, performing the operation of extracting the respective physical sign information of the at least two users from the images of the at least two users.
In an optional embodiment, the processor 91, when determining the adjustment parameter value corresponding to the target facility according to the sign difference between at least two users, is configured to:
acquiring respective sign information of at least two users, and determining respective ergonomic requirements of the at least two users according to the sign information;
respectively determining the numerical ranges of the adjustment parameters meeting the ergonomic requirements of at least two users;
respectively selecting target adjustment parameter values from the numerical ranges of the adjustment parameters corresponding to the at least two users, with the goal of adapting to the physical signs of the at least two users;
and determining an adjusting parameter value corresponding to the target facility according to the target adjusting parameter value corresponding to each of the at least two users.
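One way to realise the range-then-select procedure above is to give each user an ergonomically comfortable numeric range and pick a value adapted to all of them. The 42%-48% desk-height ratios below are invented for illustration:

```python
def comfortable_range(height_cm: float) -> tuple[float, float]:
    # Hypothetical ergonomic rule: a comfortable desk height lies between
    # 42% and 48% of the user's height.
    return (0.42 * height_cm, 0.48 * height_cm)

def shared_adjustment_value(ranges: list[tuple[float, float]]) -> float:
    # Centre of the intersection of all ranges when it exists; when the
    # ranges are disjoint the same formula yields a midpoint compromise.
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo + hi) / 2

value = shared_adjustment_value([comfortable_range(160), comfortable_range(180)])
```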
In an alternative embodiment, the target facility is associated with a drive assembly, and the processor 91, when adjusting the operating state of the target facility according to the adjustment parameter value, is configured to:
generating a control command according to the corresponding adjustment parameter value of the target facility;
and sending the control command to a driving component associated with the target facility so as to control the driving component to adjust the working state of the target facility according to the adjustment parameter value corresponding to the target facility.
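The command-generation and dispatch step can be sketched as follows; the JSON command schema and the `DriveComponent` stand-in are assumptions for illustration, not a prescribed protocol:

```python
import json

def make_control_command(facility_id: str, param: str, value: float) -> str:
    # Encode the facility's adjustment parameter value as a control command.
    return json.dumps({"facility": facility_id, "param": param, "value": value})

class DriveComponent:
    """Stand-in for the drive component associated with the target facility."""
    def __init__(self) -> None:
        self.state: dict = {}

    def apply(self, command: str) -> None:
        # The drive component adjusts the facility's working state according
        # to the adjustment parameter value carried in the command.
        cmd = json.loads(command)
        self.state[cmd["param"]] = cmd["value"]

drive = DriveComponent()
drive.apply(make_control_command("desk-1", "height", 76.2))
```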
In an alternative embodiment, the target facility includes one or more of a camera, a display screen, a table, or a microphone.
In an alternative embodiment, the type of adjustment parameter includes one or more of height, inclination or stiffness.
In an alternative embodiment, the physical signs include one or more of height, weight, or gender.
It should be noted that the technical details of the above computing device embodiments can be found in the description of the controller in the embodiment of the control system shown in fig. 3a; for brevity, they are not repeated here, but this omission should not be construed as limiting the scope of protection of the present application.
Further, as shown in fig. 9, the computing device further includes: a communication component 92, a power component 93, and the like. Fig. 9 schematically shows only some of the components, which does not mean that the computing device includes only the components shown in fig. 9.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, can implement the steps executable by the computing device in the foregoing method embodiments.
Fig. 10 is a flowchart illustrating a further facility control method according to another exemplary embodiment of the present application.
As shown in fig. 10, the method includes:
step 100, acquiring sign information of a user entering a target space;
step 101, determining an adjustment parameter value of at least one facility in a target space according to sign information of a user;
and 102, respectively adjusting the working state of at least one facility according to the adjustment parameter value of at least one facility so as to adapt to the physical sign information of the user.
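Steps 100-102 for a single user can be sketched as one mapping from sign information to a per-facility adjustment plan. The per-facility height ratios below are invented for illustration:

```python
def plan_adjustments(sign_info: dict, facilities: list[str]) -> dict:
    # Step 101: map the user's sign information to an adjustment parameter
    # value for each facility in the target space (ratios are illustrative).
    ratios = {"chair": 0.25, "table": 0.45, "display": 0.60}
    return {f: round(ratios[f] * sign_info["height_cm"], 1) for f in facilities}

# Step 100 would supply sign_info (e.g. from an image at the entrance of the
# target space); step 102 would dispatch each value to the facility's drive
# component to adjust its working state.
plan = plan_adjustments({"height_cm": 170.0}, ["chair", "table"])
```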
In an optional embodiment, the step of obtaining the sign information of the user entering the target space includes:
acquiring an image of a user by using image acquisition equipment;
and extracting the sign information of the user from the image of the user.
In an alternative embodiment, the step of capturing an image of the user with an image capture device comprises:
an image of a user is acquired at an entrance of a target space using an image acquisition device.
In an optional embodiment, before the step of extracting the sign information of the user from the image of the user, the method further includes:
identifying the identity of the user to determine whether the user is a registered user;
if the user is a registered user, extracting sign information from user information corresponding to the registered user;
and if the user is a non-registered user, executing the operation of extracting the physical sign information of the user from the image of the user.
In an optional embodiment, the step of determining the adjustment parameter value of at least one facility in the target space according to the physical sign information of the user includes:
respectively determining the ergonomic requirements of the user under at least one facility according to the physical sign information of the user;
and respectively determining the adjustment parameter value corresponding to each facility according to the ergonomic requirement corresponding to each facility.
In an alternative embodiment, each of the at least one facility has a drive assembly associated therewith, and the step of adjusting the operating state of each of the at least one facility according to the adjustment parameter value of the at least one facility includes:
respectively generating control commands corresponding to at least one facility according to the adjustment parameter values of the at least one facility;
and respectively sending a control command corresponding to each of the at least one facility to the drive assembly associated with each of the at least one facility so as to control the drive assembly associated with each of the at least one facility to adjust the working state of the at least one facility according to the adjustment parameter value of the at least one facility.
In an alternative embodiment, the facility is a chair, a sofa, a camera, a display screen, a table, or a microphone.
In an alternative embodiment, the type of adjustment parameter includes one or more of height, inclination or stiffness.
In an alternative embodiment, the vital sign information includes one or more of height, weight, or gender information.
In an alternative embodiment, the target space is a restaurant, a conference room, a chat room, or an entertainment room.
It should be noted that the technical details of the above control method embodiments can be found in the description of the controller in the embodiment of the control system shown in fig. 5; for brevity, they are not repeated here, but this omission should not be construed as limiting the scope of protection of the present application.
Fig. 11 is a schematic structural diagram of another computing device according to another exemplary embodiment of the present application. As shown in fig. 11, the computing device includes a memory 110 and a processor 111.
A processor 111, coupled to the memory 110, for executing the computer program in the memory 110 to:
acquiring sign information of a user entering a target space;
determining an adjustment parameter value of at least one facility in a target space according to the physical sign information of a user;
and respectively adjusting the working state of at least one facility according to the adjustment parameter value of at least one facility so as to adapt to the physical sign information of the user.
In an optional embodiment, the processor 111, when obtaining the physical sign information of the user entering the target space, is configured to:
acquiring an image of a user by using image acquisition equipment;
and extracting the sign information of the user from the image of the user.
In an alternative embodiment, the processor 111, when capturing an image of the user with the image capturing device, is configured to:
an image of a user is acquired at an entrance of a target space using an image acquisition device.
In an alternative embodiment, the processor 111, before extracting the physical sign information of the user from the image of the user, is further configured to:
identifying the identity of the user to determine whether the user is a registered user;
if the user is a registered user, extracting sign information from user information corresponding to the registered user;
and if the user is a non-registered user, executing the operation of extracting the physical sign information of the user from the image of the user.
In an optional embodiment, the processor 111, when determining the adjustment parameter value of the at least one facility in the target space according to the physical sign information of the user, is configured to:
respectively determining the ergonomic requirements of the user under at least one facility according to the physical sign information of the user;
and respectively determining the adjustment parameter value corresponding to each facility according to the ergonomic requirement corresponding to each facility.
In an alternative embodiment, each of the at least one facility has a driving component associated therewith, and the processor 111, when adjusting the operating state of each of the at least one facility according to the adjustment parameter value of the at least one facility, is configured to:
respectively generating control commands corresponding to at least one facility according to the adjustment parameter values of the at least one facility;
and respectively sending a control command corresponding to each of the at least one facility to the drive assembly associated with each of the at least one facility so as to control the drive assembly associated with each of the at least one facility to adjust the working state of the at least one facility according to the adjustment parameter value of the at least one facility.
In an alternative embodiment, the facility is a chair, a sofa, a camera, a display screen, a table, or a microphone.
In an alternative embodiment, the type of adjustment parameter includes one or more of height, inclination or stiffness.
In an alternative embodiment, the vital sign information includes one or more of height, weight, or gender information.
In an alternative embodiment, the target space is a restaurant, a conference room, a chat room, or an entertainment room.
It should be noted that the technical details of the above computing device embodiments can be found in the description of the controller in the embodiment of the control system shown in fig. 5; for brevity, they are not repeated here, but this omission should not be construed as limiting the scope of protection of the present application.
Further, as shown in fig. 11, the computing device further includes: a communication component 112, a power component 113, and the like. Fig. 11 schematically shows only some of the components, which does not mean that the computing device includes only the components shown in fig. 11.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program which, when executed, can implement the steps executable by the computing device in the foregoing method embodiments.
It should be noted that the steps of the methods provided in the above embodiments may all be executed by the same device, or the methods may be executed by different devices.
In addition, although some of the flows described in the above embodiments and drawings include a plurality of operations in a specific order, it should be clearly understood that these operations may be executed out of the order presented herein or in parallel. Sequence numbers such as 800 and 801 are merely used to distinguish between different operations and do not themselves represent any execution order. Moreover, these flows may include more or fewer operations, and the operations may be executed sequentially or in parallel. It should also be noted that descriptions such as "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not represent a sequential order, nor do they require that "first" and "second" be of different types.
The memory of fig. 7, 9, and 11, among other things, is used to store computer programs and may be configured to store other various data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and so forth. The memory may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The communication components of figs. 7, 9 and 11 are configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply components of figures 7, 9 and 11, among other things, provide power to the various components of the device in which the power supply components are located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal or a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (37)

1. A method for controlling a carrier, comprising:
determining a sign difference between at least two users in the same behavior scene;
respectively determining adjustment parameter values corresponding to behavior carriers used by the at least two users according to the sign difference;
adjusting the working states of the behavior carriers according to the adjustment parameter values so as to enable the behavior states of the at least two users to be mutually adaptive;
wherein the working state of the behavior carrier affects the behavior state of the user carried by the behavior carrier.
2. The method of claim 1, wherein determining a sign difference between at least two users in a same behavioral scenario comprises:
acquiring images of the at least two users by using image acquisition equipment;
extracting respective sign information of the at least two users from the images of the at least two users;
and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
3. The method of claim 2, wherein said capturing images of the at least two users with an image capture device comprises:
and respectively acquiring the images of the at least two users at the entrance of the environment space corresponding to the behavior scene by using the image acquisition equipment.
4. The method of claim 2, wherein said capturing images of the at least two users with an image capture device comprises:
in an initial stage in which the at least two users are borne on their corresponding behavior carriers, acquiring an image containing the at least two users with the image acquisition device;
wherein, in the initial stage in which the at least two users are borne on their corresponding behavior carriers, each behavior carrier is in an initial working state.
5. The method according to claim 2, wherein before extracting the respective sign information of the at least two users from the images of the at least two users, further comprising:
performing identity recognition on the at least two users to determine whether a registered user exists in the at least two users;
aiming at a registered user, extracting sign information from user information corresponding to the registered user;
and for the non-registered user, executing the operation of extracting the respective sign information of the at least two users from the images of the at least two users.
6. The method according to claim 1, wherein before determining, according to the sign difference, adjustment parameter values corresponding to behavior carriers used by the at least two users, respectively, the method further comprises:
in a preparation stage of the behavior scene, determining relative positions between the at least two users and each behavior carrier;
determining behavior carriers selected by the at least two users according to the relative positions of the at least two users and the behavior carriers;
wherein, in the preparation phase, the at least two users are respectively in position to the selected behavior carrier.
7. The method according to claim 1, wherein the determining, according to the sign difference, the adjustment parameter values corresponding to the behavior carriers respectively used by the at least two users includes:
acquiring respective sign information of the at least two users, and determining respective ergonomic requirements of the at least two users according to the sign information;
respectively determining the numerical value ranges of the adjustment parameters meeting the ergonomic requirements of the users aiming at the behavior carriers used by the at least two users respectively;
and with the goal of making the behavior states of the at least two users mutually adapted, selecting a target adjustment parameter value from the numerical range of the adjustment parameter corresponding to the behavior carrier used by each of the at least two users, as the adjustment parameter value corresponding to that behavior carrier.
8. The method of claim 1, wherein each behavioral carrier is associated with a driving component, and the adjusting the operating state of each behavioral carrier according to the adjustment parameter value comprises:
respectively generating control commands corresponding to the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers used by the at least two users;
and respectively sending the control commands corresponding to the behavior carriers to the driving components associated with the behavior carriers so as to control the driving components to adjust the working states of the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers.
9. The method of claim 1, wherein the behavior carrier comprises one or more of a chair, a table, a sofa, or a lift table;
the type of the adjustment parameter comprises one or more of height, inclination or hardness;
the signs include one or more of height, weight, or gender.
10. The method of claim 1, wherein the behavioral scenario includes one or more of a chat scenario, a dining scenario, a meeting scenario, or an entertainment scenario.
11. A facility control method, characterized by comprising:
determining a sign difference between at least two users using the same target facility;
determining an adjustment parameter value corresponding to the target facility according to the sign difference between the at least two users;
and adjusting the working state of the target facility according to the adjustment parameter value so as to adapt to the physical signs of the at least two users.
12. The method of claim 11, wherein determining a difference in signs between at least two users using the same target facility comprises:
acquiring images of the at least two users by using image acquisition equipment;
extracting respective sign information of the at least two users from the images of the at least two users;
and determining the sign difference between the at least two users according to the respective sign information of the at least two users.
13. The method of claim 12, wherein said capturing images of the at least two users with an image capture device comprises:
and respectively acquiring the images of the at least two users at the entrance of the environment space in which the target facility is located, by using the image acquisition device.
14. The method of claim 12, wherein said capturing images of the at least two users with an image capture device comprises:
acquiring, with the image acquisition device, an image containing the at least two users after the at least two users have moved into their respective facility use positions.
15. The method according to claim 12, wherein before extracting the respective sign information of the at least two users from the images of the at least two users, further comprising:
performing identity recognition on the at least two users to determine whether a registered user exists in the at least two users;
aiming at a registered user, extracting sign information from user information corresponding to the registered user;
and for the non-registered user, executing the operation of extracting the respective sign information of the at least two users from the images of the at least two users.
16. The method according to claim 11, wherein determining the adjustment parameter value corresponding to the target facility according to the sign difference between at least two users comprises:
acquiring respective sign information of the at least two users, and determining respective ergonomic requirements of the at least two users according to the sign information;
respectively determining the numerical ranges of the adjustment parameters meeting the ergonomic requirements of the at least two users;
respectively selecting target adjustment parameter values from the numerical ranges of the adjustment parameters corresponding to the at least two users, with the goal of adapting to the physical signs of the at least two users;
and determining an adjusting parameter value corresponding to the target facility according to the target adjusting parameter value corresponding to each of the at least two users.
17. The method of claim 11, wherein a drive assembly is associated with the target facility, and wherein adjusting the operating state of the target facility in accordance with the adjustment parameter value comprises:
generating a control command according to the adjustment parameter value corresponding to the target facility;
and sending the control command to a driving component associated with the target facility so as to control the driving component to adjust the working state of the target facility according to the adjustment parameter value corresponding to the target facility.
18. The method of claim 11, wherein the target facility comprises one or more of a camera, a display screen, a table, or a microphone;
the type of the adjustment parameter comprises one or more of height, inclination or hardness;
the signs include one or more of height, weight, or gender.
19. A facility control method, characterized by comprising:
acquiring sign information of a user entering a target space;
determining an adjustment parameter value of at least one facility in the target space according to the physical sign information of the user;
and respectively adjusting the working state of the at least one facility according to the adjustment parameter value of the at least one facility so as to adapt to the physical sign information of the user.
20. The method of claim 19, wherein the obtaining of the sign information of the user entering the target space comprises:
acquiring an image of the user by using an image acquisition device;
and extracting sign information of the user from the image of the user.
21. The method of claim 20, wherein said capturing an image of the user with an image capture device comprises:
and acquiring the image of the user at the entrance of the target space by using the image acquisition equipment.
22. The method of claim 20, wherein before extracting the sign information of the user from the image of the user, further comprising:
identifying the identity of the user to determine whether the user is a registered user;
if the user is a registered user, extracting sign information from user information corresponding to the registered user;
and if the user is a non-registered user, executing the operation of extracting the sign information of the user from the image of the user.
23. The method according to claim 19, wherein the determining an adjustment parameter value of at least one facility in the target space according to the sign information of the user comprises:
respectively determining the ergonomic requirements of the user under at least one facility according to the physical sign information of the user;
and respectively determining the adjustment parameter values corresponding to the at least one facility according to the ergonomic requirements corresponding to the at least one facility.
24. The method of claim 19, wherein each of the at least one facility has associated therewith a drive assembly, and wherein adjusting the operating state of each of the at least one facility in accordance with the adjusted parameter value for the at least one facility comprises:
generating a control command for each of the at least one facility according to the adjustment parameter value of that facility;
and sending the control command for each facility to the drive assembly associated with that facility, so as to control the drive assembly to adjust the working state of the facility according to its adjustment parameter value.
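The command generation and dispatch described in this claim might look like the following sketch, where `DriveAssembly` and the command dictionary format are assumptions of this example rather than the patent's interface:

```python
class DriveAssembly:
    """Stands in for the drive assembly associated with one facility; it
    simply records the last parameters it was told to apply."""
    def __init__(self):
        self.state = {}

    def apply(self, command):
        self.state.update(command["parameters"])

def dispatch_commands(adjustments, assemblies):
    """Generate one control command per facility and send it to that
    facility's associated drive assembly."""
    for facility, params in adjustments.items():
        command = {"facility": facility, "parameters": params}
        assemblies[facility].apply(command)
```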
25. The method of claim 19, wherein the facility is a chair, a sofa, a camera, a display screen, a table, or a microphone;
the type of the adjustment parameter comprises one or more of height, inclination or hardness;
the physical sign information comprises one or more of height, weight or gender information;
the target space is a restaurant, a conference room, a chat room, or an entertainment room.
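The claim enumerates facility types and adjustment parameter types without pairing them; one plausible (assumed) pairing can be captured as a lookup table used to validate commands before dispatch:

```python
# Illustrative pairing of the facility types listed in the claim with the
# adjustment parameter types they plausibly expose. The per-facility
# assignment is a guess; the claim only lists the two sets.
ADJUSTABLE_PARAMETERS = {
    "chair":          {"height", "inclination", "hardness"},
    "sofa":           {"inclination", "hardness"},
    "camera":         {"height", "inclination"},
    "display_screen": {"height", "inclination"},
    "table":          {"height"},
    "microphone":     {"height", "inclination"},
}

def is_adjustable(facility, parameter_type):
    """True if the given facility supports the given adjustment parameter."""
    return parameter_type in ADJUSTABLE_PARAMETERS.get(facility, set())
```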
26. A control system, comprising: a controller and at least two behavior carriers;
the at least two behavior carriers are used for bearing users;
the controller is used for determining a physical sign difference between at least two users in the same behavior scene; respectively determining, according to the physical sign difference, the adjustment parameter values corresponding to the behavior carriers used by the at least two users; and adjusting the working states of the behavior carriers according to the adjustment parameter values, so that the behavior states of the at least two users adapt to each other;
wherein the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
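One concrete reading of making the behavior states "mutually adaptive" in claim 26: compensate a physical sign (height) difference between seated users by raising the shorter users' carriers. The 0.5 seated-torso ratio and the base seat height below are illustrative assumptions, not the patent's model:

```python
def mutually_adaptive_heights(standing_heights_cm, base_seat_cm=40.0):
    """Given each user's standing height, return one seat height per
    behavior carrier so that the users' seated eye levels roughly align."""
    # Seated torso-plus-head length approximated as half of standing height.
    torsos = [h * 0.5 for h in standing_heights_cm]
    tallest = max(torsos)
    # Raise shorter users' seats to offset the torso difference.
    return [round(base_seat_cm + (tallest - t), 1) for t in torsos]
```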
27. The system of claim 26, further comprising a detection component;
the detection component is used for detecting physical sign information of the at least two users in the same behavior scene and providing the physical sign information to the controller, so that the controller can determine the physical sign difference between the at least two users.
28. The system of claim 26, wherein the at least two behavior carriers each have a drive assembly associated therewith;
the controller is used for respectively generating control commands for the behavior carriers according to the adjustment parameter values corresponding to the behavior carriers used by the at least two users; and respectively sending the control commands to the drive assemblies associated with the behavior carriers, so as to control the drive assemblies to adjust the working states of the behavior carriers according to the corresponding adjustment parameter values.
29. An intelligent facility, comprising a facility body, a processor and a drive assembly;
the processor is used for determining a physical sign difference among the users in a behavior scene containing the user carried by the intelligent facility; and determining an adjustment parameter value according to the physical sign difference;
and adjusting, by means of the drive assembly, the working state of the intelligent facility according to the adjustment parameter value, so that the behavior state of the user carried by the intelligent facility matches the behavior states of the other users in the behavior scene.
30. The intelligent facility of claim 29, wherein the processor is specifically configured to:
acquire physical sign information of each user in the behavior scene by using a detection component corresponding to the behavior scene;
and determine the physical sign difference between the users according to their respective physical sign information.
31. A control system, comprising: a controller and a target facility;
the target facility is used for providing facility service for the user;
the controller is configured to determine a physical sign difference between at least two users using the same target facility; determine an adjustment parameter value corresponding to the target facility according to the physical sign difference; and adjust the working state of the target facility according to the adjustment parameter value, so as to adapt to the physical signs of the at least two users.
32. An intelligent facility, comprising a facility body, a processor and a drive assembly;
the processor is configured to determine a physical sign difference between at least two users using the intelligent facility; and to determine an adjustment parameter value according to the physical sign difference;
and to adjust, by means of the drive assembly, the working state of the intelligent facility according to the adjustment parameter value, so as to adapt to the physical signs of the at least two users.
33. A control system, comprising: a controller and at least one facility located in a target space;
the at least one facility is used for providing facility service for the user;
the controller is used for acquiring physical sign information of a user entering the target space; determining an adjustment parameter value of the at least one facility according to the physical sign information of the user; and respectively adjusting the working state of the at least one facility according to the adjustment parameter value of the at least one facility, so as to adapt to the user's physical sign information.
34. A computing device comprising a memory and a processor;
the memory is configured to store one or more computer instructions;
the processor is coupled with the memory for executing the one or more computer instructions for:
determining a physical sign difference between at least two users in the same behavior scene;
respectively determining, according to the physical sign difference, the adjustment parameter values corresponding to the behavior carriers used by the at least two users;
adjusting the working states of the behavior carriers according to the adjustment parameter values so as to enable the behavior states of the at least two users to be mutually adaptive;
wherein the working state of the behavior carrier influences the behavior state of the user carried by the behavior carrier.
35. A computing device comprising a memory and a processor;
the memory is configured to store one or more computer instructions;
the processor is coupled with the memory for executing the one or more computer instructions for:
determining a physical sign difference between at least two users using the same target facility;
determining an adjustment parameter value corresponding to the target facility according to the physical sign difference between the at least two users;
and adjusting the working state of the target facility according to the adjustment parameter value, so as to adapt to the physical signs of the at least two users.
36. A computing device comprising a memory and a processor;
the memory is configured to store one or more computer instructions;
the processor is coupled with the memory for executing the one or more computer instructions for:
acquiring physical sign information of a user entering a target space;
determining an adjustment parameter value of at least one facility in the target space according to the physical sign information of the user;
and respectively adjusting the working state of the at least one facility according to the adjustment parameter value of the at least one facility so as to adapt to the physical sign information of the user.
37. A computer-readable storage medium storing computer instructions, which when executed by one or more processors, cause the one or more processors to perform the carrier control method of any one of claims 1-10 or the facility control method of any one of claims 11-25.
CN202010093466.6A 2020-02-14 2020-02-14 Carrier, facility control method, equipment, system and storage medium Pending CN113268014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010093466.6A CN113268014A (en) 2020-02-14 2020-02-14 Carrier, facility control method, equipment, system and storage medium

Publications (1)

Publication Number Publication Date
CN113268014A 2021-08-17

Family

ID=77227309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010093466.6A Pending CN113268014A (en) 2020-02-14 2020-02-14 Carrier, facility control method, equipment, system and storage medium

Country Status (1)

Country Link
CN (1) CN113268014A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160089294A1 (en) * 2014-09-26 2016-03-31 Marc Jonas Guillaume Dual-chair assembly
US20160260019A1 (en) * 2015-03-03 2016-09-08 Carlos Riquelme Ruiz Smart office desk interactive with the user
CN106724376A (en) * 2016-12-06 2017-05-31 南京九致信息科技有限公司 Height Adjustable intelligent table, chair and height adjusting method
CN108965443A (en) * 2018-07-23 2018-12-07 广州维纳斯家居股份有限公司 Intelligent elevated table height adjusting method, device, intelligent elevated table and storage medium
CN208541010U (en) * 2017-10-30 2019-02-26 北京广研广播电视高科技中心 A kind of showing stand for client connection
CN110693654A (en) * 2019-10-15 2020-01-17 北京小米移动软件有限公司 Method and device for adjusting intelligent wheelchair and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭园; 时新; 郭晨旭; 申黎明: "Research on the Ergonomic Adaptability of Desks and Chairs Based on Computer Vision" (基于计算机视觉的桌椅人机适应性应用研究), 装饰, no. 12 *

Similar Documents

Publication Publication Date Title
KR101803081B1 (en) Robot for store management
US9007473B1 (en) Architecture for augmented reality environment
CN102378097A (en) Microphone control system and method
CN104582187A (en) Recording and lamplight control system and method based on face recognition and facial expression recognition
CN102333177B (en) Photographing support system, photographing support method, server and photographing apparatus
CN103945121A (en) Information processing method and electronic equipment
US10084970B2 (en) System and method for automatically generating split screen for a video of a dynamic scene
CN105892854A (en) Photographing parameter menu loading method and device
CN106227059A (en) Intelligent home furnishing control method based on indoor threedimensional model and equipment
CN108644976B (en) Control method and device of air conditioner
CN113268014A (en) Carrier, facility control method, equipment, system and storage medium
CN111756992A (en) Wearable device follow-up shooting method and wearable device
KR101077267B1 (en) Stenography Input System And Method For Conference Using Face Recognition
US11057563B2 (en) Image pickup device and method for controlling same
JP2021105939A (en) Office layout presenting apparatus, office layout presenting method, and program
US20170371237A1 (en) Projection method and device for robot
KR102229034B1 (en) Apparatus and method for creating information related to facial expression and apparatus for creating facial expression
WO2012151395A2 (en) Providing an adaptive media experience
US10796106B2 (en) Apparatus and method for selecting speaker by using smart glasses
CN105652705A (en) Cab state regulation and control method and system thereof
CN108182588A (en) A kind of hair style design and clipping device, system and method, equipment and medium
KR101680524B1 (en) System for displaying speaker in conference room and control method thereof
CN114264055B (en) Temperature regulation method, device, storage medium and equipment
CN113536274B (en) Industrial design service system
CN104460323A (en) Intelligent furniture and method and device for analyzing health condition of user through intelligent furniture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination