CN112822471B - Projection control method, intelligent robot and related products - Google Patents

Projection control method, intelligent robot and related products

Info

Publication number
CN112822471B
Authority
CN
China
Prior art keywords
target, user, image, parameter, determining
Prior art date
Legal status
Active
Application number
CN202011642861.1A
Other languages
Chinese (zh)
Other versions
CN112822471A (en)
Inventor
傅峰峰
Current Assignee
Guangzhou Fugang Life Intelligent Technology Co Ltd
Original Assignee
Guangzhou Fugang Life Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Fugang Life Intelligent Technology Co Ltd
Priority to CN202011642861.1A
Publication of CN112822471A
Application granted
Publication of CN112822471B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Abstract

Embodiments of the present application disclose a projection control method, an intelligent robot, and related products, applied to an intelligent robot. The method includes: acquiring a target action parameter of a user; determining a target projection control parameter corresponding to the target action parameter; and performing a projection operation based on the target projection control parameter. By adopting the embodiments of the present application, the intelligence of the intelligent robot can be improved.

Description

Projection control method, intelligent robot and related products
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a projection control method, an intelligent robot and related products.
Background
An intelligent robot can understand human language, converse with an operator in human language, and form within its own "consciousness" a detailed model of the external environment, the actual situation in which it "lives". It can analyze the situation it encounters, adjust its own actions to meet all the requirements set by the operator, formulate the desired actions, and accomplish them under conditions of insufficient information and rapidly changing environments. Of course, it cannot do this in the way the human mind does; nevertheless, attempts are still made to establish some kind of "micro-world" that computers can understand. In daily life, an intelligent robot also needs to learn continuously to improve its own capabilities. Therefore, how to improve the intelligence of intelligent robots is a problem that urgently needs to be solved.
Disclosure of Invention
Embodiments of the present application provide a projection control method, an intelligent robot, and related products, which can improve the intelligence of the intelligent robot.
In a first aspect, an embodiment of the present application provides a projection control method, which is applied to an intelligent robot, and the method includes:
acquiring target action parameters of a user;
determining a target projection control parameter corresponding to the target action parameter;
and realizing projection operation based on the target projection control parameters.
Optionally, the obtaining target action parameters of the user includes:
acquiring a user image of the user;
performing image segmentation on the user image to obtain a user area image;
and identifying the user area image to obtain the target action parameter.
Optionally, the acquiring the user image of the user includes:
detecting whether the user is in a preset area;
when the user is in the preset area, acquiring target environment parameters;
determining reference shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
determining a target action amplitude of the user;
determining a target fine tuning coefficient corresponding to the target action amplitude according to a mapping relation between preset action amplitude and the fine tuning coefficient;
adjusting the reference shooting parameters according to the target fine tuning coefficient to obtain target shooting parameters;
and shooting the user according to the target shooting parameters to obtain a user image of the user.
Optionally, the identifying the user area image to obtain the target motion parameter includes:
carrying out feature extraction on the user area image to obtain a target feature set;
inputting the target feature set into a preset neural network model to obtain a target operation result;
and determining the target action parameters corresponding to the target operation result according to a preset mapping relation between the operation result and the action parameters.
Optionally, the target action parameter includes a target action type and a target action similarity;
the determining of the target projection control parameter corresponding to the target motion parameter includes:
determining a target operation instruction corresponding to the target action type according to a mapping relation between a preset action type and the operation instruction;
acquiring a reference projection control parameter corresponding to the target operation instruction;
determining a target adjusting coefficient corresponding to the target action similarity according to a preset mapping relation between the action similarity and the adjusting coefficient;
and adjusting the reference projection control parameter according to the target adjustment coefficient to obtain the target projection control parameter.
Optionally, after the obtaining of the target motion parameter of the user and before the determining of the target projection control parameter corresponding to the target motion parameter, the method further includes:
comparing the target action parameter with a preset action parameter;
and when the target action parameter is successfully compared with the preset action parameter, executing the step of determining the target projection control parameter corresponding to the target action parameter.
In a second aspect, an embodiment of the present application provides a projection control apparatus, which is applied to an intelligent robot, and the apparatus includes: an acquisition unit, a determination unit and a projection unit, wherein,
the acquisition unit is used for acquiring target action parameters of a user;
the determining unit is used for determining a target projection control parameter corresponding to the target action parameter;
and the projection unit is used for realizing projection operation based on the target projection control parameter.
Optionally, in the aspect of acquiring the target action parameter of the user, the acquiring unit is specifically configured to:
acquiring a user image of the user;
carrying out image segmentation on the user image to obtain a user area image;
and identifying the user area image to obtain the target action parameter.
In a third aspect, the present application provides an intelligent robot, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
it can be seen that the projection control method, the intelligent robot and the related products described in the embodiments of the present application are applied to an intelligent robot, obtain a target action parameter of a user, determine a target projection control parameter corresponding to the target action parameter, and implement a projection operation based on the target projection control parameter, so that a user action can be identified, and a projection control parameter corresponding to the user action is determined to complete a corresponding projection operation, which is helpful for improving intelligence of the intelligent robot.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without creative effort.
Fig. 1A is a schematic structural diagram of an intelligent robot provided in an embodiment of the present application;
fig. 1B is a schematic flowchart of a projection control method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another projection control method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of another intelligent robot provided in an embodiment of the present application;
fig. 4 is a block diagram of functional units of a projection control apparatus according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The intelligent robot related to the embodiments of the present application may be an Automated Guided Vehicle (AGV) robot. Taking an AGV as an example, an AGV is a transport vehicle equipped with an electromagnetic or optical automatic navigation device, capable of traveling along a prescribed navigation path, and having safety protection and various transfer functions. In industrial applications it is a transport vehicle that requires no driver and uses a rechargeable storage battery as its power source. Its traveling path and behavior can be controlled by a computer, or its traveling path can be established by an electromagnetic track (electromagnetic path-following system): the electromagnetic track is adhered to the floor, and the unmanned transport vehicle moves and acts according to the information provided by the electromagnetic track.
As shown in fig. 1A, fig. 1A is a schematic structural diagram of an intelligent robot provided in an embodiment of the present application. The intelligent robot includes a processor, a memory, a signal processor, a transceiver, a display screen, a loudspeaker, a microphone, a Random Access Memory (RAM), a camera, a sensor, a network module, and the like. The memory, the signal processor, the loudspeaker, the microphone, the RAM, the camera, the sensor and the network module are connected with the processor, and the transceiver is connected with the signal processor.
The processor is the control center of the intelligent robot. It connects the various parts of the whole intelligent robot through various interfaces and lines, and executes the various functions of the intelligent robot and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the intelligent robot as a whole. The processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or a Network Processing Unit (NPU).
Further, the processor may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes the various functional applications and the projection control of the intelligent robot by running the software programs and/or modules stored in the memory. The memory mainly includes a program storage area and a data storage area, where the program storage area can store an operating system, a software program required by at least one function, and the like, and the data storage area can store data created according to the use of the intelligent robot, and the like. Further, the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Wherein the sensor comprises at least one of: light-sensitive sensors, gyroscopes, infrared proximity sensors, vibration detection sensors, pressure sensors, etc. Among them, the light sensor, also called an ambient light sensor, is used to detect the ambient light brightness. The light sensor may include a light sensitive element and an analog to digital converter. The photosensitive element is used for converting collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the light sensor may further include a signal amplifier, and the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The camera may be a visible light camera (a general view angle camera, a wide angle camera), an infrared camera, or a dual camera (having a distance measurement function), which is not limited herein.
The network module may be at least one of the following: a Bluetooth module, a wireless fidelity (Wi-Fi) module, etc., which is not limited herein.
Based on the intelligent robot described in fig. 1A, the following projection control method can be executed, and the specific steps are as follows:
acquiring target action parameters of a user;
determining a target projection control parameter corresponding to the target action parameter;
and realizing projection operation based on the target projection control parameters.
It can be seen that, the intelligent robot described in the embodiment of the present application obtains the target motion parameter of the user, determines the target projection control parameter corresponding to the target motion parameter, and implements the projection operation based on the target projection control parameter, so that the user motion can be identified, and the projection control parameter corresponding to the user motion can be determined, so as to complete the corresponding projection operation, which is helpful for improving the intelligence of the intelligent robot.
Referring to fig. 1B, fig. 1B is a schematic flowchart of a projection control method according to an embodiment of the present disclosure, and as shown in the drawing, the projection control method is applied to the intelligent robot shown in fig. 1A, and includes:
101. and acquiring target action parameters of the user.
In this embodiment of the present application, the target action parameter may be at least one of the following: action type, action amplitude, action direction, action part, action similarity, and the like, which is not limited herein. The action type is the category of the action and may be at least one of the following: running, shooting, boxing, bixin (the finger-heart gesture), and the like, which is not limited herein. The action amplitude is the degree to which the action is stretched or extended. The action direction can be understood as the direction in which the action is performed. The action part can be understood as the body part that performs the action. The action similarity is the similarity to a standard action. In a specific implementation, the intelligent robot can acquire a user image of the user and analyze the user image to obtain the target action parameter of the user.
In one possible example, the step 101 of obtaining the target action parameter of the user may include the following steps:
11. acquiring a user image of the user;
12. carrying out image segmentation on the user image to obtain a user area image;
13. and identifying the user area image to obtain the target action parameter.
The intelligent robot can shoot the user to obtain a user image of the user. Since the user image contains not only the target user but also the background, image segmentation can be performed on the user image to obtain a user area image, that is, an image containing only the user's limbs, and the user area image can then be identified to obtain the corresponding target action parameter.
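As an illustration of the capture-segment-recognize flow just described, a minimal Python sketch follows. All three helper functions are hypothetical placeholders introduced here for illustration; the patent does not prescribe a concrete camera interface, segmentation method, or recognizer.

```python
import numpy as np

def capture_user_image() -> np.ndarray:
    # Placeholder: in practice the robot's camera supplies the frame (user plus background).
    return np.zeros((480, 640), dtype=np.uint8)

def segment_user_region(user_image: np.ndarray) -> np.ndarray:
    # Placeholder: in practice a person-segmentation step keeps only the user's limbs/body.
    return user_image

def recognize_user_region(user_region: np.ndarray) -> dict:
    # Placeholder: in practice recognition of the segmented image yields the action parameter.
    return {"action_type": "bixin", "amplitude": 0.7, "similarity": 0.9}

def get_target_action_parameter() -> dict:
    # Capture -> segment -> recognize, as described in steps 11-13 above.
    return recognize_user_region(segment_user_region(capture_user_image()))

print(get_target_action_parameter())
```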
Further, in a possible example, the step 11 of acquiring the user image of the user may include the following steps:
111. detecting whether the user is in a preset area;
112. when the user is in the preset area, acquiring target environment parameters;
113. determining reference shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
114. determining a target action amplitude of the user;
115. determining a target fine tuning coefficient corresponding to the target action amplitude according to a mapping relation between a preset action amplitude and a fine tuning coefficient;
116. adjusting the reference shooting parameters according to the target fine adjustment coefficient to obtain target shooting parameters;
117. and shooting the user according to the target shooting parameters to obtain a user image of the user.
In this embodiment, the environmental parameter may be at least one of the following: ambient brightness, ambient color temperature, magnetic field interference strength, ambient temperature, ambient humidity, weather, atmospheric pressure, and the like, without limitation. In specific implementation, the intelligent robot may include an environmental sensor, and environmental parameters may be collected by the environmental sensor, and the environmental sensor may be at least one of the following: an ambient brightness detection sensor, a magnetic field detection sensor, a temperature sensor, a humidity sensor, a weather sensor, etc., and is not limited thereto. The intelligent robot can pre-store the mapping relation between the preset environment parameters and the shooting parameters and the mapping relation between the preset action amplitude and the fine adjustment coefficient. The preset area may be set by the user or by default.
In a specific implementation, the intelligent robot may detect, through a distance sensor, whether the user is in the preset area. When the user is in the preset area, it obtains the target environment parameter, and then determines the reference shooting parameter corresponding to the target environment parameter according to the mapping relation between the preset environment parameters and the shooting parameters. It may then perform coarse recognition on a preview image or a preview video to determine the target action amplitude of the user, and determine the target fine-tuning coefficient corresponding to the target action amplitude according to the mapping relation between the preset action amplitude and the fine-tuning coefficient, where the fine-tuning coefficient may range from -0.1 to 0.1, for example from -0.05 to 0.05. Finally, it adjusts the reference shooting parameter according to the target fine-tuning coefficient to obtain the target shooting parameter, and the specific calculation formula is as follows:
target shooting parameter = reference shooting parameter × (1 + target fine-tuning coefficient)
In this way, the shooting parameters can be adjusted according to the action amplitude of the user, so that image shaking or blurring can be avoided. The user is then shot according to the target shooting parameters to obtain a user image of the user. Thus, an image suited to both the environment and the user's action can be captured, which facilitates accurately obtaining the action parameter later.
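A minimal sketch of this shooting-parameter adjustment is given below, under the assumption that the two mapping relations are simple lookup tables and that the shooting parameter is a single scalar such as exposure time; the table contents, bucket names, and helper functions are illustrative assumptions, not values from the patent.

```python
# Illustrative lookup tables standing in for the preset mapping relations.
ENV_TO_REFERENCE_SHOOTING = {          # ambient-brightness bucket -> reference exposure time (ms)
    "dark": 33.0,
    "normal": 16.0,
    "bright": 8.0,
}
AMPLITUDE_TO_FINE_TUNING = [           # (amplitude upper bound, fine-tuning coefficient in [-0.1, 0.1])
    (0.3, 0.05),                       # small motion: a slightly longer exposure is acceptable
    (0.7, 0.0),
    (1.0, -0.05),                      # large motion: shorten exposure to avoid blur
]

def fine_tuning_coefficient(amplitude: float) -> float:
    for upper_bound, coefficient in AMPLITUDE_TO_FINE_TUNING:
        if amplitude <= upper_bound:
            return coefficient
    return AMPLITUDE_TO_FINE_TUNING[-1][1]

def target_shooting_parameter(env_bucket: str, amplitude: float) -> float:
    reference = ENV_TO_REFERENCE_SHOOTING[env_bucket]
    # target shooting parameter = reference shooting parameter x (1 + target fine-tuning coefficient)
    return reference * (1.0 + fine_tuning_coefficient(amplitude))

print(target_shooting_parameter("normal", 0.9))   # 16.0 * (1 - 0.05) = 15.2
```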
Further, in a possible example, after the step 12 performs image segmentation on the user image to obtain a user area image, and before the step 13 identifies the user area image to obtain the target motion parameter, the method may further include the following steps:
a1, evaluating the image quality of the user area image to obtain an image quality evaluation value;
a2, when the image quality evaluation value is larger than a preset threshold value, the step of identifying the user area image to obtain the target action parameter is executed;
a3, when the image quality evaluation value is smaller than or equal to the preset threshold value, determining a target image enhancement parameter corresponding to the target shooting parameter;
a4, carrying out image enhancement processing on the user image according to the target image enhancement parameter;
in step 13, the user area image is recognized to obtain the target motion parameter, which may be implemented as follows:
and identifying the user area image subjected to image enhancement processing to obtain the target action parameter.
The preset threshold may be set by the user or by system default. In a specific implementation, the intelligent robot may perform image quality evaluation on the user area image using at least one image quality evaluation parameter to obtain the image quality evaluation value, where the image quality evaluation parameter may be at least one of the following: information entropy, signal-to-noise ratio, average gradient, and the like, which is not limited herein.
Further, when the image quality evaluation value is greater than the preset threshold, the intelligent robot may perform step 13; when the image quality evaluation value is less than or equal to the preset threshold, the target image enhancement parameter corresponding to the target shooting parameter may be determined according to a mapping relation between preset shooting parameters and image enhancement parameters. In this embodiment of the present application, the image enhancement parameter may be at least one of the following: an image enhancement algorithm and control parameters of the image enhancement algorithm. The image enhancement algorithm may be at least one of the following: wavelet transformation, a neural network algorithm, gray stretching, histogram equalization, and the like, which is not limited herein; the control parameters of the image enhancement algorithm are used to adjust the degree of image enhancement. The intelligent robot can then perform image enhancement processing on the user image according to the target image enhancement parameter, and identify the user region image after the image enhancement processing to obtain the target action parameter, thereby improving the accuracy of obtaining the action parameter.
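The quality-gated branch described above can be sketched as follows; the threshold value and the three stand-in callables (evaluate_quality, enhance, recognize) are assumptions made for illustration.

```python
import numpy as np

QUALITY_THRESHOLD = 0.6   # assumed preset threshold (user-set or system default)

def recognize_with_quality_gate(user_region, evaluate_quality, enhance, recognize):
    """Recognize the user-area image, enhancing it first when its quality is too low."""
    quality = evaluate_quality(user_region)
    if quality > QUALITY_THRESHOLD:
        return recognize(user_region)     # quality acceptable: identify directly (step A2)
    enhanced = enhance(user_region)       # e.g. histogram equalization or gray stretching (step A4)
    return recognize(enhanced)            # identify the enhanced image instead

# Hypothetical usage with simple stand-ins for the three steps.
frame = np.zeros((128, 128), dtype=np.uint8)
result = recognize_with_quality_gate(
    frame,
    evaluate_quality=lambda img: float(img.std()) / 128.0,
    enhance=lambda img: img,
    recognize=lambda img: {"action_type": "boxing", "similarity": 0.8},
)
print(result)
```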
Further, in a possible example, in the step A1, performing image quality evaluation on the user area image to obtain an image quality evaluation value, the method may include the following steps:
a11, determining the information entropy of the user area image;
a12, dividing the user area image into a plurality of areas;
a13, determining the signal-to-noise ratio of each region in the plurality of regions to obtain a plurality of signal-to-noise ratios;
a14, determining target mean square deviations of the plurality of signal-to-noise ratios;
a15, determining a target fluctuation coefficient corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fluctuation coefficient;
a16, determining a reference information entropy according to the target fluctuation coefficient and the information entropy;
and A17, determining the image quality evaluation value corresponding to the reference information entropy according to the mapping relation between the preset information entropy and the image quality evaluation value.
In specific implementation, a mapping relation between a preset mean square error and a fluctuation coefficient and a mapping relation between a preset information entropy and an image quality evaluation value may be stored in the intelligent robot in advance.
Specifically, the intelligent robot may determine the information entropy of the user area image, divide the user area image into a plurality of areas, determine the signal-to-noise ratio of each of the plurality of areas to obtain a plurality of signal-to-noise ratios, and determine the target mean square error of the plurality of signal-to-noise ratios. It may then determine the target fluctuation coefficient corresponding to the target mean square error according to the mapping relation between the preset mean square error and the fluctuation coefficient, where the value range of the fluctuation coefficient is 0 to 1 and the mean square error reflects the correlation and fluctuation between the areas. Further, the intelligent robot may determine the reference information entropy according to the target fluctuation coefficient and the information entropy, and the specific calculation formula is as follows:
reference information entropy = (1 - target fluctuation coefficient) × information entropy
Furthermore, the intelligent robot can determine the image quality evaluation value corresponding to the reference information entropy according to the mapping relation between the preset information entropy and the image quality evaluation value, and thus, accurate image quality evaluation can be realized.
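The following sketch follows the entropy-and-SNR procedure described above; the region grid size, the particular SNR definition, and the two mapping relations (mean square error to fluctuation coefficient, reference entropy to quality value) are assumptions chosen only to make the example runnable.

```python
import numpy as np

def information_entropy(gray: np.ndarray) -> float:
    """Shannon entropy of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def region_snr(region: np.ndarray) -> float:
    """A simple mean/std signal-to-noise estimate for one region (one possible definition)."""
    std = region.std()
    return float(region.mean() / std) if std > 0 else float(region.mean())

def fluctuation_coefficient(mean_square_error: float) -> float:
    """Assumed mapping from the mean square error of the SNRs to a coefficient in [0, 1]."""
    return float(min(1.0, mean_square_error / 10.0))

def image_quality_evaluation(gray: np.ndarray, grid: int = 4) -> float:
    entropy = information_entropy(gray)
    # Divide the user-area image into grid x grid regions and compute each region's SNR.
    h, w = gray.shape
    snrs = [region_snr(gray[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid])
            for i in range(grid) for j in range(grid)]
    mean_square_error = float(np.mean((np.array(snrs) - np.mean(snrs)) ** 2))
    k = fluctuation_coefficient(mean_square_error)
    reference_entropy = (1.0 - k) * entropy       # reference entropy per the formula above
    return reference_entropy / 8.0                # assumed mapping: normalize by the 8-bit maximum

print(image_quality_evaluation(np.random.randint(0, 256, (120, 120), dtype=np.uint8)))
```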
In one possible example, in step 13, the recognizing the user area image to obtain the target motion parameter may include the following steps:
131. carrying out feature extraction on the user area image to obtain a target feature set;
132. inputting the target feature set into a preset neural network model to obtain a target operation result;
133. and determining the target action parameters corresponding to the target operation result according to a preset mapping relation between the operation result and the action parameters.
The preset neural network model may be trained in advance for the intelligent robot. In this embodiment of the present application, the preset neural network model may be at least one of the following: a recurrent neural network model, a convolutional neural network model, a fully-connected neural network model, a spiking neural network model, and the like, which is not limited herein. The intelligent robot can pre-store the mapping relation between the preset operation result and the action parameter.
In a specific implementation, the intelligent robot may perform feature extraction on the user area image to obtain a target feature set. The target feature set may include one or more features, and a feature may be at least one of the following: a feature point, a feature contour, a feature line, a feature vector, and the like, which is not limited herein. The intelligent robot may then input the target feature set into the preset neural network model to obtain a target operation result. In this embodiment of the present application, the operation result may be a probability value or a label, and the label may be used to indicate action-related information, where the action-related information may be at least one of the following: action type, action part, action amplitude, action similarity, and the like, which is not limited herein. Furthermore, the intelligent robot can determine the target action parameter corresponding to the target operation result according to the preset mapping relation between the operation result and the action parameter. In this way, the action information can be determined from the features in the image, action recognition of the user is realized, and the user experience is improved.
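A minimal sketch of this recognition step is shown below. The toy feature extractor, the single fully-connected layer with softmax, and the result-to-parameter mapping are all assumptions standing in for the preset neural network model and mapping relation; a real implementation would load a trained model instead of random weights.

```python
import numpy as np

ACTION_LABELS = ["running", "shooting", "boxing", "bixin"]   # assumed label set

def extract_features(user_region: np.ndarray) -> np.ndarray:
    """Toy feature set: mean, std, and gradient energy (stand-ins for real features)."""
    gy, gx = np.gradient(user_region.astype(np.float32))
    return np.array([user_region.mean() / 255.0,
                     user_region.std() / 255.0,
                     np.mean(np.hypot(gx, gy)) / 255.0])

def preset_network(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One fully-connected layer with softmax, standing in for the preset neural network model."""
    logits = weights @ features + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                     # operation result: one probability per action label

def recognize_action(user_region: np.ndarray, weights, bias) -> dict:
    probabilities = preset_network(extract_features(user_region), weights, bias)
    best = int(np.argmax(probabilities))
    # Mapping between the operation result and the action parameter (here: label plus similarity).
    return {"action_type": ACTION_LABELS[best], "similarity": float(probabilities[best])}

# Hypothetical usage with random weights.
rng = np.random.default_rng(0)
region = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(recognize_action(region, rng.normal(size=(4, 3)), rng.normal(size=4)))
```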
102. And determining target projection control parameters corresponding to the target action parameters.
In this embodiment, the projection control parameter may be at least one of the following: projection position, projection brightness, font size, resolution, projection color temperature, etc., without limitation. The intelligent robot can pre-store the mapping relation between the preset action parameters and the projection control parameters, and further can determine the target projection control parameters corresponding to the target action parameters according to the mapping relation.
In one possible example, the target action parameters include a target action type and a target action similarity;
in the step 102, determining the target projection control parameter corresponding to the target motion parameter may include the following steps:
21. determining a target operation instruction corresponding to the target action type according to a mapping relation between a preset action type and the operation instruction;
22. acquiring a reference projection control parameter corresponding to the target operation instruction;
23. determining a target adjusting coefficient corresponding to the target action similarity according to a preset mapping relation between the action similarity and the adjusting coefficient;
24. and adjusting the reference projection control parameter according to the target adjustment coefficient to obtain the target projection control parameter.
The target action parameters may include a target action type and a target action similarity. The intelligent robot can pre-store the mapping relation between the preset action type and the operation instruction, and the mapping relation between the preset action similarity and the adjustment coefficient.
In a specific implementation, the intelligent robot may determine the target operation instruction corresponding to the target action type according to the mapping relation between the preset action types and operation instructions, and obtain the reference projection control parameter corresponding to the target operation instruction. It may then determine the target adjustment coefficient corresponding to the target action similarity according to the mapping relation between the preset action similarities and adjustment coefficients, and adjust the reference projection control parameter according to the target adjustment coefficient to obtain the target projection control parameter, where the specific calculation formula is as follows:
target projection control parameter = reference projection control parameter × (1 + target adjustment coefficient)
In this way, the intelligent robot can determine the projection control parameter corresponding to the user's action, which improves the intelligence of the intelligent robot.
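The following sketch shows how steps 21 to 24 could be implemented with simple lookup tables; the instruction names, the brightness values, and the adjustment-coefficient table are illustrative assumptions.

```python
# Assumed mapping: action type -> operation instruction.
ACTION_TO_INSTRUCTION = {
    "bixin": "increase_brightness",
    "boxing": "next_picture",
}
# Assumed mapping: operation instruction -> reference projection control parameter (brightness, lumens).
INSTRUCTION_TO_REFERENCE = {
    "increase_brightness": 800.0,
    "next_picture": 600.0,
}
# Assumed mapping: action-similarity bucket -> adjustment coefficient.
SIMILARITY_TO_ADJUSTMENT = [
    (0.6, -0.1),    # low similarity: damp the adjustment
    (0.9, 0.0),
    (1.0, 0.1),     # very close to the standard action: amplify slightly
]

def adjustment_coefficient(similarity: float) -> float:
    for upper_bound, coefficient in SIMILARITY_TO_ADJUSTMENT:
        if similarity <= upper_bound:
            return coefficient
    return SIMILARITY_TO_ADJUSTMENT[-1][1]

def target_projection_control_parameter(action_type: str, similarity: float) -> float:
    instruction = ACTION_TO_INSTRUCTION[action_type]                 # step 21
    reference = INSTRUCTION_TO_REFERENCE[instruction]                # step 22
    coefficient = adjustment_coefficient(similarity)                 # step 23
    # step 24: target parameter = reference parameter x (1 + target adjustment coefficient)
    return reference * (1.0 + coefficient)

print(target_projection_control_parameter("bixin", 0.95))   # 800 * 1.1 = 880.0
```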
103. And realizing projection operation based on the target projection control parameters.
In specific implementation, the intelligent robot may include a projection module, and then, the projection module may be controlled to implement a projection operation with the target projection control parameter.
For example, taking an AGV intelligent robot as an example, projection mapping can be implemented at home: a person may perform certain actions according to a game prompt in the projection (for example, on the background-wall display screen of a smart cabinet) or on a smart terminal (a smart bracelet, etc.), and the projection mapping function of the AGV intelligent robot may project different pictures according to these actions, thereby implementing somatosensory (motion-sensing) interaction.
In one possible example, after the step 101 of acquiring the target motion parameter of the user and before the step 102 of determining the target projection control parameter corresponding to the target motion parameter, the following steps may be further included:
a1, comparing the target action parameter with a preset action parameter;
and A2, when the target action parameter is successfully compared with the preset action parameter, executing the step of determining the target projection control parameter corresponding to the target action parameter.
The preset action parameter can be stored in the intelligent robot in advance. In a specific implementation, the target action parameter can be compared with the preset action parameter; when the comparison succeeds, it indicates that the target action is the specified action, that is, only the specified action triggers the projection operation, thereby preventing false triggering.
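A short sketch of this false-trigger guard, assuming the comparison is a membership-plus-similarity check against a stored set of specified actions, is given below.

```python
PRESET_ACTIONS = {"bixin", "boxing"}       # assumed set of specified trigger actions
SIMILARITY_GATE = 0.8                      # assumed minimum similarity for a successful comparison

def should_trigger_projection(action: dict) -> bool:
    """Only a specified action with sufficient similarity triggers the projection operation."""
    return action["action_type"] in PRESET_ACTIONS and action["similarity"] >= SIMILARITY_GATE

print(should_trigger_projection({"action_type": "bixin", "similarity": 0.92}))    # True
print(should_trigger_projection({"action_type": "running", "similarity": 0.99}))  # False
```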
It can be seen that the projection control method described in the embodiment of the present application is applied to an intelligent robot, obtains a target motion parameter of a user, determines a target projection control parameter corresponding to the target motion parameter, and implements a projection operation based on the target projection control parameter, so that a user motion can be identified, and a projection control parameter corresponding to the user motion can be determined, so as to complete a corresponding projection operation, which is beneficial to improving the intelligence of the intelligent robot.
Referring to fig. 2, in accordance with the embodiment shown in fig. 1B, fig. 2 is a schematic flowchart of a projection control method provided in an embodiment of the present application, applied to the intelligent robot shown in fig. 1A, where the projection control method includes:
201. and acquiring target action parameters of the user.
202. And comparing the target action parameter with a preset action parameter.
203. And when the target action parameter is successfully compared with the preset action parameter, determining a target projection control parameter corresponding to the target action parameter.
204. And realizing projection operation based on the target projection control parameters.
For the detailed description of the steps 201 to 204, reference may be made to corresponding steps of the projection control method described in the foregoing fig. 1B, and details are not repeated here.
It can be seen that the projection control method described in the embodiment of the present application is applied to an intelligent robot, obtains a target motion parameter of a user, compares the target motion parameter with a preset motion parameter, determines a target projection control parameter corresponding to the target motion parameter when the target motion parameter is successfully compared with the preset motion parameter, and implements a projection operation based on the target projection control parameter, so that a user motion can be identified, and determines a projection control parameter corresponding to the specified motion to complete a corresponding projection operation, which is helpful for improving intelligence of the intelligent robot.
Referring to fig. 3 in keeping with the above embodiments, fig. 3 is a schematic structural diagram of an intelligent robot according to an embodiment of the present application, and as shown in the drawing, the intelligent robot includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and in an embodiment of the present application, the programs include instructions for performing the following steps:
acquiring target action parameters of a user;
determining a target projection control parameter corresponding to the target action parameter;
and realizing projection operation based on the target projection control parameters.
It can be seen that, the intelligent robot described in the embodiment of the present application obtains the target motion parameter of the user, determines the target projection control parameter corresponding to the target motion parameter, and implements the projection operation based on the target projection control parameter, so that the user motion can be identified, and the projection control parameter corresponding to the user motion can be determined, so as to complete the corresponding projection operation, which is helpful for improving the intelligence of the intelligent robot.
In one possible example, in the obtaining of the target action parameter of the user, the program comprises instructions for:
acquiring a user image of the user;
carrying out image segmentation on the user image to obtain a user area image;
and identifying the user area image to obtain the target action parameter.
Further, in one possible example, in said capturing a user image of said user, the above program comprises instructions for performing the steps of:
detecting whether the user is in a preset area;
when the user is in the preset area, acquiring target environment parameters;
determining reference shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
determining a target action amplitude of the user;
determining a target fine tuning coefficient corresponding to the target action amplitude according to a mapping relation between preset action amplitude and the fine tuning coefficient;
adjusting the reference shooting parameters according to the target fine adjustment coefficient to obtain target shooting parameters;
and shooting the user according to the target shooting parameters to obtain a user image of the user.
Further, in one possible example, in the identifying the user area image to obtain the target motion parameter, the program includes instructions for performing the following steps:
performing feature extraction on the user area image to obtain a target feature set;
inputting the target feature set into a preset neural network model to obtain a target operation result;
and determining the target action parameters corresponding to the target operation result according to a preset mapping relation between the operation result and the action parameters.
In one possible example, the target action parameters include a target action type and a target action similarity;
in the aspect of determining the target projection control parameter corresponding to the target motion parameter, the program includes instructions for:
determining a target operation instruction corresponding to the target action type according to a mapping relation between a preset action type and the operation instruction;
acquiring a reference projection control parameter corresponding to the target operation instruction;
determining a target adjusting coefficient corresponding to the target action similarity according to a preset mapping relation between the action similarity and the adjusting coefficient;
and adjusting the reference projection control parameter according to the target adjustment coefficient to obtain the target projection control parameter.
In one possible example, after the obtaining of the target motion parameter of the user and before the determining of the target projection control parameter corresponding to the target motion parameter, the program further includes instructions for performing the following steps:
comparing the target action parameter with a preset action parameter;
and when the target action parameter is successfully compared with the preset action parameter, executing the step of determining the target projection control parameter corresponding to the target action parameter.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It can be understood that, in order to implement the above functions, corresponding hardware structures and/or software modules for performing the respective functions are included. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments provided herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiment of the present application, the functional units may be divided according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 4 is a block diagram of functional units of a projection control apparatus 400 according to an embodiment of the present application, where the apparatus 400 is applied to an intelligent robot, and the apparatus 400 includes: an acquisition unit 401, a determination unit 402 and a projection unit 403, wherein,
the acquiring unit 401 is configured to acquire a target action parameter of a user;
the determining unit 402 is configured to determine a target projection control parameter corresponding to the target motion parameter;
the projection unit 403 is configured to implement a projection operation based on the target projection control parameter.
It can be seen that the projection control device described in the embodiment of the present application is applied to an intelligent robot, obtains a target motion parameter of a user, determines a target projection control parameter corresponding to the target motion parameter, and implements a projection operation based on the target projection control parameter, so that a user motion can be identified, and a projection control parameter corresponding to the user motion can be determined to complete a corresponding projection operation, which is beneficial to improving the intelligence of the intelligent robot.
In one possible example, in terms of obtaining the target action parameter of the user, the obtaining unit 401 is specifically configured to:
acquiring a user image of the user;
carrying out image segmentation on the user image to obtain a user area image;
and identifying the user area image to obtain the target action parameter.
Further, in a possible example, in terms of the acquiring the user image of the user, the acquiring unit 401 is specifically configured to:
detecting whether the user is in a preset area;
when the user is in the preset area, acquiring target environment parameters;
determining reference shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
determining a target action amplitude of the user;
determining a target fine tuning coefficient corresponding to the target action amplitude according to a mapping relation between preset action amplitude and the fine tuning coefficient;
adjusting the reference shooting parameters according to the target fine tuning coefficient to obtain target shooting parameters;
and shooting the user according to the target shooting parameters to obtain a user image of the user.
Further, in a possible example, in the aspect of identifying the user area image to obtain the target motion parameter, the obtaining unit 401 is specifically configured to:
carrying out feature extraction on the user area image to obtain a target feature set;
inputting the target feature set into a preset neural network model to obtain a target operation result;
and determining the target action parameters corresponding to the target operation result according to a preset mapping relation between the operation result and the action parameters.
In one possible example, the target action parameters include a target action type and a target action similarity;
in the aspect of determining the target projection control parameter corresponding to the target motion parameter, the determining unit 402 is specifically configured to:
determining a target operation instruction corresponding to the target action type according to a mapping relation between a preset action type and the operation instruction;
acquiring a reference projection control parameter corresponding to the target operation instruction;
determining a target adjusting coefficient corresponding to the target action similarity according to a preset mapping relation between the action similarity and the adjusting coefficient;
and adjusting the reference projection control parameter according to the target adjustment coefficient to obtain the target projection control parameter.
In one possible example, after the obtaining of the target motion parameter of the user and before the determining of the target projection control parameter corresponding to the target motion parameter, the apparatus 400 is further specifically configured to:
comparing the target action parameter with a preset action parameter;
and when the target action parameter is successfully compared with the preset action parameter, executing the step of determining the target projection control parameter corresponding to the target action parameter.
It can be understood that the functions of each program module of the projection control apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program enables a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments, and the computer includes an intelligent robot.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an intelligent robot.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps of the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory, and the memory may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing embodiments have been described in detail, and specific examples are used herein to explain the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. A projection control method is applied to an intelligent robot, and comprises the following steps:
acquiring target action parameters of a user, wherein the target action parameters comprise: an action type, an action amplitude, an action direction, an action part and an action similarity, and the action type comprises at least one of the following: running, shooting, boxing, bixin;
determining a target projection control parameter corresponding to the target action parameter;
implementing a projection operation based on the target projection control parameter;
the obtaining of the target action parameter of the user includes:
acquiring a user image of the user;
carrying out image segmentation on the user image to obtain a user area image;
identifying the user area image to obtain the target action parameter;
after the image segmentation is performed on the user image to obtain a user area image, and before the image identification is performed on the user area image to obtain the target action parameter, the method further includes:
performing image quality evaluation on the user area image to obtain an image quality evaluation value;
when the image quality evaluation value is larger than a preset threshold value, executing the step of identifying the user area image to obtain the target action parameter;
when the image quality evaluation value is smaller than or equal to the preset threshold value, determining a target image enhancement parameter corresponding to a target shooting parameter;
carrying out image enhancement processing on the user image according to the target image enhancement parameter;
the identifying the user area image to obtain the target action parameter includes:
identifying the user area image subjected to image enhancement processing to obtain the target action parameter;
wherein the performing image quality evaluation on the user area image to obtain an image quality evaluation value includes:
determining the information entropy of the user area image;
dividing the user area image into a plurality of areas;
determining a signal-to-noise ratio of each of the plurality of regions to obtain a plurality of signal-to-noise ratios;
determining a target mean square error of the plurality of signal-to-noise ratios;
determining a target fluctuation coefficient corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fluctuation coefficient;
determining a reference information entropy according to the target fluctuation coefficient and the information entropy;
and determining the image quality evaluation value corresponding to the reference information entropy according to a mapping relation between preset information entropy and image quality evaluation value.
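For illustration only, and not as part of the claimed subject matter, the image quality evaluation recited in claim 1 could be sketched in Python roughly as follows. The entropy and signal-to-noise formulas, the 4x4 region grid, and the two simple functions standing in for the preset mapping relations are assumptions introduced here for readability, not details disclosed in the patent.

import numpy as np

def information_entropy(gray):
    # Shannon entropy of an 8-bit grayscale image.
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def region_snr(region):
    # Simple mean/standard-deviation signal-to-noise estimate for one region.
    std = region.std()
    return float(region.mean() / std) if std > 0 else float("inf")

def evaluate_image_quality(gray, grid=(4, 4)):
    # Steps follow claim 1: entropy, per-region SNR, mean square error of the
    # SNRs, fluctuation coefficient, reference entropy, mapped quality value.
    entropy = information_entropy(gray)

    # Divide the user area image into a plurality of regions and collect SNRs.
    snrs = []
    for band in np.array_split(gray, grid[0], axis=0):
        for region in np.array_split(band, grid[1], axis=1):
            snr = region_snr(region)
            if np.isfinite(snr):
                snrs.append(snr)
    snrs = np.array(snrs)

    # Target mean square error: dispersion of the SNRs around their mean.
    mse = float(((snrs - snrs.mean()) ** 2).mean()) if snrs.size else 0.0

    # Hypothetical mapping: larger SNR dispersion -> smaller fluctuation coefficient.
    fluctuation = 1.0 / (1.0 + mse)

    # Reference entropy combines the fluctuation coefficient with the raw entropy.
    reference_entropy = fluctuation * entropy

    # Hypothetical mapping from reference entropy to a quality value in [0, 1].
    return min(reference_entropy / 8.0, 1.0)

A caller would then compare the returned value with the preset threshold of claim 1 to decide between recognising the user area image directly and first applying image enhancement.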
2. The method of claim 1, wherein the obtaining the user image of the user comprises:
detecting whether the user is in a preset area;
when the user is in the preset area, acquiring a target environment parameter;
determining reference shooting parameters corresponding to the target environment parameters according to a mapping relation between preset environment parameters and the shooting parameters;
determining a target action amplitude of the user;
determining a target fine tuning coefficient corresponding to the target action amplitude according to a mapping relation between a preset action amplitude and a fine tuning coefficient;
adjusting the reference shooting parameters according to the target fine adjustment coefficient to obtain target shooting parameters;
and shooting the user according to the target shooting parameters to obtain a user image of the user.
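As a purely illustrative sketch of the shooting-parameter selection in claim 2, the following Python uses two hypothetical lookup structures in place of the preset mapping relations; the environment labels, parameter fields and numeric values are assumptions, not values taken from the patent.

# Hypothetical reference shooting parameters per environment condition.
ENVIRONMENT_TO_SHOOTING = {
    "bright": {"exposure_ms": 8.0, "iso": 100, "frame_rate": 30},
    "dim":    {"exposure_ms": 33.0, "iso": 800, "frame_rate": 24},
}

def fine_tuning_coefficient(action_amplitude):
    # Hypothetical mapping: larger movements call for a stronger adjustment.
    return 1.0 + 0.5 * min(max(action_amplitude, 0.0), 1.0)

def target_shooting_parameters(environment, action_amplitude):
    reference = ENVIRONMENT_TO_SHOOTING[environment]
    k = fine_tuning_coefficient(action_amplitude)
    return {
        "exposure_ms": reference["exposure_ms"] / k,   # shorter exposure for fast motion
        "iso": round(reference["iso"] * k),            # compensate with higher sensitivity
        "frame_rate": round(reference["frame_rate"] * k),
    }

# Example: a large movement in a dim room.
print(target_shooting_parameters("dim", action_amplitude=0.8))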
3. The method according to claim 1, wherein the identifying the user area image to obtain the target action parameter comprises:
carrying out feature extraction on the user area image to obtain a target feature set;
inputting the target feature set into a preset neural network model to obtain a target operation result;
and determining the target action parameters corresponding to the target operation result according to a preset mapping relation between the operation result and the action parameters.
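The recognition pipeline of claim 3 (feature extraction, a preset neural network model, and a mapping from the operation result to action parameters) could be sketched as below; the histogram feature, the single-layer softmax model and the class list are placeholders chosen here for brevity, since the patent does not disclose a concrete network.

import numpy as np

ACTION_TYPES = ["running", "shooting", "boxing", "finger_heart"]

def extract_features(user_area_image):
    # Toy feature set: a normalised 32-bin intensity histogram of the user area.
    hist, _ = np.histogram(user_area_image, bins=32, range=(0, 255))
    return hist / max(hist.sum(), 1)

def run_model(features, weights):
    # Stand-in for the preset neural network: one linear layer plus softmax.
    logits = weights @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def recognise_action(user_area_image, weights):
    probs = run_model(extract_features(user_area_image), weights)
    idx = int(probs.argmax())
    # Map the operation result (class index and probability) back to the
    # action parameters named in claim 1.
    return {"action_type": ACTION_TYPES[idx], "action_similarity": float(probs[idx])}

# weights = np.random.randn(len(ACTION_TYPES), 32)  # placeholder for trained parameters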
4. The method according to any one of claims 1-3, wherein the target action parameters comprise a target action type and a target action similarity;
the determining a target projection control parameter corresponding to the target action parameter comprises:
determining a target operation instruction corresponding to the target action type according to a mapping relation between a preset action type and the operation instruction;
acquiring a reference projection control parameter corresponding to the target operation instruction;
determining a target adjusting coefficient corresponding to the target action similarity according to a preset mapping relation between the action similarity and the adjusting coefficient;
and adjusting the reference projection control parameter according to the target adjustment coefficient to obtain the target projection control parameter.
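Claim 4 chains two lookups and a scaling step; a minimal sketch, with hypothetical instruction names, reference parameters and an assumed similarity-to-coefficient mapping, might read:

# Hypothetical tables standing in for the two preset mapping relations of claim 4.
ACTION_TO_INSTRUCTION = {
    "running": "scroll_content",
    "shooting": "select_item",
    "boxing": "pause_playback",
    "finger_heart": "like_content",
}
INSTRUCTION_TO_REFERENCE = {
    "scroll_content": {"brightness": 0.7, "scroll_speed": 1.0},
    "select_item":    {"brightness": 0.9, "scroll_speed": 0.0},
    "pause_playback": {"brightness": 0.5, "scroll_speed": 0.0},
    "like_content":   {"brightness": 0.8, "scroll_speed": 0.0},
}

def adjustment_coefficient(similarity):
    # Hypothetical mapping: lower-confidence matches damp the control effect.
    return max(0.0, min(similarity, 1.0))

def target_projection_parameters(action_type, similarity):
    instruction = ACTION_TO_INSTRUCTION[action_type]
    reference = INSTRUCTION_TO_REFERENCE[instruction]
    k = adjustment_coefficient(similarity)
    return {name: value * k for name, value in reference.items()}

# Example: a confidently recognised running action.
print(target_projection_parameters("running", similarity=0.92))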
5. The method according to any one of claims 1-3, wherein after the acquiring target action parameters of the user and before the determining a target projection control parameter corresponding to the target action parameter, the method further comprises:
comparing the target action parameter with a preset action parameter;
and when the target action parameter is successfully compared with the preset action parameter, executing the step of determining the target projection control parameter corresponding to the target action parameter.
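The gating step of claim 5 amounts to a comparison against a preset action parameter before any projection control parameter is computed; a small illustrative check, with a hypothetical tolerance field, could be:

def matches_preset(target, preset, tolerance=0.1):
    # Hypothetical comparison: action types must match exactly and the
    # amplitudes must agree within a tolerance; otherwise the method stops
    # before determining the target projection control parameter.
    if target.get("action_type") != preset.get("action_type"):
        return False
    return abs(target.get("action_amplitude", 0.0)
               - preset.get("action_amplitude", 0.0)) <= tolerance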
6. A projection control device, applied to an intelligent robot, the device comprising: an acquiring unit, a determining unit and a projection unit, wherein
the acquiring unit is configured to acquire target action parameters of a user, wherein the target action parameters comprise: an action type, an action amplitude, an action direction, an action part and an action similarity, and the action type comprises at least one of the following: running, shooting, boxing, or the finger-heart (bixin) gesture;
the determining unit is used for determining a target projection control parameter corresponding to the target action parameter;
the projection unit is used for realizing projection operation based on the target projection control parameter;
in the aspect of acquiring the target action parameter of the user, the acquiring unit is specifically configured to:
acquiring a user image of the user;
carrying out image segmentation on the user image to obtain a user area image;
identifying the user area image to obtain the target action parameter;
after the image segmentation is performed on the user image to obtain a user area image, and before the identifying is performed on the user area image to obtain the target action parameter, the device is further configured to:
performing image quality evaluation on the user area image to obtain an image quality evaluation value;
when the image quality evaluation value is larger than a preset threshold value, executing the step of identifying the user area image to obtain the target action parameter;
when the image quality evaluation value is smaller than or equal to the preset threshold value, determining a target image enhancement parameter corresponding to a target shooting parameter;
carrying out image enhancement processing on the user image according to the target image enhancement parameter;
the identifying the user area image to obtain the target action parameter includes:
identifying the user area image subjected to image enhancement processing to obtain the target action parameter;
wherein the performing image quality evaluation on the user area image to obtain an image quality evaluation value includes:
determining the information entropy of the user area image;
dividing the user area image into a plurality of areas;
determining a signal-to-noise ratio of each of the plurality of regions to obtain a plurality of signal-to-noise ratios;
determining a target mean square error of the plurality of signal-to-noise ratios;
determining a target fluctuation coefficient corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fluctuation coefficient;
determining a reference information entropy according to the target fluctuation coefficient and the information entropy;
and determining the image quality evaluation value corresponding to the reference information entropy according to a mapping relation between preset information entropy and image quality evaluation value.
7. An intelligent robot, comprising a processor and a memory, wherein the memory is configured to store one or more programs configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method of any one of claims 1-5.
8. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202011642861.1A 2020-12-30 2020-12-30 Projection control method, intelligent robot and related products Active CN112822471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642861.1A CN112822471B (en) 2020-12-30 2020-12-30 Projection control method, intelligent robot and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011642861.1A CN112822471B (en) 2020-12-30 2020-12-30 Projection control method, intelligent robot and related products

Publications (2)

Publication Number Publication Date
CN112822471A CN112822471A (en) 2021-05-18
CN112822471B true CN112822471B (en) 2023-02-03

Family

ID=75856504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011642861.1A Active CN112822471B (en) 2020-12-30 2020-12-30 Projection control method, intelligent robot and related products

Country Status (1)

Country Link
CN (1) CN112822471B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113867165A (en) * 2021-10-13 2021-12-31 达闼科技(北京)有限公司 Method and device for robot to optimize service of intelligent equipment and electronic equipment
CN114157846B (en) * 2021-11-11 2024-01-12 深圳市普渡科技有限公司 Robot, projection method, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5872981B2 (en) * 2012-08-02 2016-03-01 オリンパス株式会社 Shooting equipment, moving body shooting method, shooting program
CN105446580B (en) * 2014-08-13 2019-02-05 联想(北京)有限公司 A kind of control method and portable electronic device
TW201636234A (en) * 2015-04-14 2016-10-16 鴻海精密工業股份有限公司 Control system and control method for vehicle
CN105573692A (en) * 2015-05-29 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Projection control method, associated terminal and system
CN109683778B (en) * 2018-12-25 2021-06-15 努比亚技术有限公司 Flexible screen control method and device and computer readable storage medium
CN110275611B (en) * 2019-05-27 2023-02-17 联想(上海)信息技术有限公司 Parameter adjusting method and device and electronic equipment
CN110365957A (en) * 2019-07-05 2019-10-22 深圳市优必选科技股份有限公司 A kind of projecting method, projection arrangement and projection robot

Also Published As

Publication number Publication date
CN112822471A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
US10325351B2 (en) Systems and methods for normalizing an image
US10769453B2 (en) Electronic device and method of controlling operation of vehicle
US20200265239A1 (en) Method and apparatus for processing video stream
US8792722B2 (en) Hand gesture detection
US9734404B2 (en) Motion stabilization and detection of articulated objects
CN112822471B (en) Projection control method, intelligent robot and related products
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
US8923552B2 (en) Object detection apparatus and object detection method
JP2020149642A (en) Object tracking device and object tracking method
JP2021503139A (en) Image processing equipment, image processing method and image processing program
KR20230069892A (en) Method and apparatus for identifying object representing abnormal temperatures
CN111080665B (en) Image frame recognition method, device, equipment and computer storage medium
CN114332925A (en) Method, system and device for detecting pets in elevator and computer readable storage medium
US20240048672A1 (en) Adjustment of shutter value of surveillance camera via ai-based object recognition
US10990859B2 (en) Method and system to allow object detection in visual images by trainable classifiers utilizing a computer-readable storage medium and processing unit
CN115409991B (en) Target identification method and device, electronic equipment and storage medium
US20230005162A1 (en) Image processing system, image processing method, and storage medium
CN115298705A (en) License plate recognition method and device, electronic equipment and storage medium
CN111507142A (en) Facial expression image processing method and device and electronic equipment
CN111723614A (en) Traffic signal lamp identification method and device
KR102458896B1 (en) Method and device for segmentation map based vehicle license plate recognition
KR20230064959A (en) Surveillance Camera WDR(Wide Dynamic Range) Image Processing Using Object Detection Based on Artificial Intelligence
KR20210039237A (en) Electronic apparatus and method for controlling thereof
WO2023149295A1 (en) Information processing device, information processing method, and program
US20230206643A1 (en) Occlusion detection and object coordinate correction for estimating the position of an object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20211029
Address after: 510663 501-2, Guangzheng science and Technology Industrial Park, No. 11, Nanyun fifth road, Science City, Huangpu District, Guangzhou, Guangdong Province
Applicant after: GUANGZHOU FUGANG LIFE INTELLIGENT TECHNOLOGY Co.,Ltd.
Address before: 510700 501-1, Guangzheng science and Technology Industrial Park, No. 11, Yunwu Road, Science City, Huangpu District, Guangzhou City, Guangdong Province
Applicant before: GUANGZHOU FUGANG WANJIA INTELLIGENT TECHNOLOGY Co.,Ltd.
GR01 Patent grant