CN114364099B - Method for adjusting intelligent light equipment, robot and electronic equipment

Info

Publication number
CN114364099B
Authority
CN
China
Prior art keywords
image
environment image
intelligent
light
equipment
Prior art date
Legal status
Active
Application number
CN202210039500.0A
Other languages
Chinese (zh)
Other versions
CN114364099A
Inventor
高斌 (Gao Bin)
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority to CN202210039500.0A
Publication of CN114364099A
Priority to PCT/CN2023/072077 (published as WO2023134743A1)
Application granted
Publication of CN114364099B
Legal status: Active

Classifications

    • G03B 15/02: Illuminating scene (special procedures for taking photographs; apparatus therefor)
    • H05B 47/10: Controlling the light source (circuit arrangements for operating light sources in general)
    • H05B 47/105: Controlling the light source in response to determined parameters
    • H05B 47/115: Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H05B 47/165: Controlling the light source following a pre-assigned programmed sequence; logic control [LC]
    • Y02B 20/40: Control techniques providing energy savings, e.g. smart controller or presence detection (energy-efficient lighting technologies)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Circuit Arrangement For Electric Light Sources In General (AREA)

Abstract

The disclosure provides a method for adjusting intelligent lighting equipment, a robot, and an electronic device, wherein the method comprises the following steps: classifying illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier; acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment; inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image; receiving a selection instruction for the illumination mode of the environment image; acquiring, according to the selection instruction, the parameter settings of the intelligent lighting equipment corresponding to the illumination mode; and adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter settings. With this method, the lighting equipment can be adjusted frequently without relying on manpower, a suitable illumination model can be selected at higher efficiency and lower cost, the expected lighting effect and a cinema-grade light effect can be achieved, and a suitable picture can finally be obtained.

Description

Method for adjusting intelligent light equipment, robot and electronic equipment
Technical Field
The disclosure relates to the field of intelligent control, and in particular to a method for adjusting intelligent lighting equipment, a robot, and an electronic device.
Background
With the continuous development of intelligent devices, their control is becoming more and more intelligent. In practice, when people take pictures, the available light rarely meets optimal shooting conditions: it may be too dark, too bright, or unevenly distributed.
At present, when shooting films or short videos, if the scene lighting does not meet shooting conditions, a gaffer and a photographer usually have to cooperate, manually and repeatedly moving the light sources, reflectors, and light-absorbing boards. For each scene, a supervisor directs the gaffer to adjust the position, angle, and brightness of the lights, which is time-consuming and labor-intensive. Moreover, lighting devices generally support only independent control, and a single device has limited influence on the overall lighting of the environment, resulting in a very poor user experience.
Disclosure of Invention
Based on the above problems, the question arises of how to adjust lighting equipment frequently without relying on a large amount of manpower, and how to select a suitable illumination model at higher efficiency and lower cost, so as to achieve the expected lighting effect, realize cinema-grade lighting, and finally obtain a suitable picture.
The purpose of the present invention is to provide a method for adjusting intelligent lighting equipment, a robot, and an electronic device that intelligently select a corresponding illumination mode for the user, so that the lighting of the environment and of the target object is improved and becomes suitable for shooting or recognition, improving the light effect of the intelligent lighting equipment and solving the above problems.
To achieve the above object, in a first aspect, an embodiment of the present invention provides a method for adjusting intelligent lighting apparatus, including:
classifying illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier;
acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image;
receiving a selection instruction of the illumination mode of the environment image;
acquiring parameter settings of the intelligent light equipment corresponding to the illumination mode according to the selection instruction;
and carrying out parameter adjustment on the intelligent lighting equipment in the digital twin scene according to the parameter setting.
Further, the classifying, by the convolutional neural network classifier, the illumination mode generated by the intelligent lighting device includes:
Comparing the environment image with a standard image corresponding to the illumination mode of the classification mark in the convolutional neural network classifier;
calculating the parameter similarity of the environment image and the standard image through image parameters;
and marking the environment image with the parameter similarity larger than a first threshold as an illumination mode corresponding to the standard image, and recording the image ID of the marked environment image.
Further, the image ID of the environment image corresponds to different illumination modes according to different light parameters when the robot shoots.
Further, the acquiring the environmental image by the robot, and constructing a digital twin scene according to the environmental image and the intelligent lighting device includes:
shooting an environment image through a vision sensor of the robot;
identifying intelligent lighting equipment in the environment image;
establishing a mapping relation between the intelligent light equipment in the digital twin scene and the intelligent light equipment in the real scene;
and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
Further, the inputting the environmental image into the convolutional neural network classifier and obtaining a plurality of illumination modes corresponding to the environmental image includes:
Inputting the environment image acquired by the robot into the convolutional neural network classifier;
identifying the environment image and acquiring an image ID corresponding to the environment image;
and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
Further, before the step of receiving the instruction for selecting the illumination mode of the environmental image, the method further includes:
establishing a corresponding relation between the illumination mode and the lamplight parameters of the intelligent lamplight equipment;
and generating a control instruction of the intelligent lighting equipment according to the corresponding relation.
Further, the obtaining, according to the selection instruction, the parameter setting of the intelligent lighting device corresponding to the illumination mode includes:
acquiring an illumination mode selection instruction of a user through input or identification;
acquiring parameter settings corresponding to the illumination modes according to the mapping relation between the light parameters of the intelligent light equipment;
and acquiring an adjustable range of the parameter setting.
Further, the step of obtaining the illumination mode selection instruction of the user through input or identification includes:
acquiring an illumination mode selection instruction of a user through key input and/or touch screen input; or
acquiring an illumination mode selection instruction of the user through voice recognition, gesture recognition, action recognition and/or expression recognition.
Further, the parameter adjustment for the intelligent lighting device includes:
and performing position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment and/or light flow adjustment on the intelligent light equipment.
Further, the method further comprises:
and mapping the parameter adjustment of the intelligent lighting equipment in the digital twin scene to the parameter adjustment of the intelligent lighting equipment in the real scene.
Further, the method further comprises:
identifying a target object in an environment image acquired by the robot;
and lighting the target object and adjusting its light distribution according to the selected illumination mode.
In a second aspect, an embodiment of the present disclosure provides an apparatus for adjusting intelligent lighting equipment, including:
the classification module is used for classifying illumination modes generated by the intelligent lighting equipment through the convolutional neural network classifier;
the scene construction module is used for acquiring an environment image through a robot and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
The illumination mode acquisition module is used for inputting the environment image into the convolutional neural network classifier and acquiring a plurality of illumination modes corresponding to the environment image;
the instruction module is used for receiving a selection instruction of the illumination mode of the environment image;
the parameter acquisition module is used for acquiring parameter settings of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and the adjusting module is used for carrying out parameter adjustment on the intelligent lighting equipment in the digital twin scene according to the parameter setting.
In a third aspect, embodiments of the present disclosure provide a robot, comprising:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the robot to implement the method according to any one of the first aspects above.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, including:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the electronic device to implement the method of any one of the first aspects above.
In a fifth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer-readable instructions which, when executed by a computer, cause the computer to implement the method of any one of the first aspects.
The embodiments of the present disclosure provide a method for adjusting intelligent lighting equipment, a robot, and an electronic device, wherein the method comprises the following steps: classifying illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier; acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment; inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image; receiving a selection instruction for the illumination mode of the environment image; acquiring, according to the selection instruction, the parameter settings of the intelligent lighting equipment corresponding to the illumination mode; and adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter settings. Through this method, the lighting equipment can be adjusted frequently without relying on manpower, a suitable illumination model can be selected at higher efficiency and lower cost, the expected lighting effect and a cinema-grade light effect can be achieved, and a suitable picture can finally be obtained. Meanwhile, when the robot performs target recognition, the target object can be lit appropriately, so that the robot achieves a more accurate recognition rate and a better shooting effect.
The foregoing description is only an overview of the technical solutions of the present disclosure. In order that the above and other objects, features, and advantages of the present disclosure can be understood more clearly and implemented in accordance with the contents of the specification, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flow chart of a method for adjusting light of an intelligent lighting device according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a convolutional neural network classifier model provided in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an apparatus for adjusting light of an intelligent lighting device according to another embodiment of the disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to another embodiment of the present disclosure.
Detailed Description
In order that the technical contents of the present disclosure may be more clearly described, further description is made below in connection with specific embodiments.
The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as detailed in the appended claims.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The embodiments disclosed are described in detail below with reference to the accompanying drawings.
Based on the technical scheme of the embodiment of the disclosure, the following technical problems are solved:
how to adjust the lighting equipment frequently without relying on a large amount of manpower, and how to select a suitable illumination model at higher efficiency and lower cost, so as to achieve the expected lighting effect, in which the number, position, and brightness of the lights match the different face shapes, timbres, moods, voices, limb movements, and depth-of-field layers of the people and objects, thereby realizing cinema-grade lighting and finally obtaining a suitable picture.
Meanwhile, how should the target object be lit when the robot performs target recognition, so that the robot obtains a more accurate recognition rate?
Fig. 1 is a flow chart of a method for adjusting intelligent lighting equipment according to an embodiment of the present disclosure. The method provided by this embodiment may be performed by an electronic device, or by a robot and its control device; it may be implemented as software or as a combination of software and hardware, and may be integrated in a device of a control system, such as a terminal device. As shown in fig. 1, the method comprises the following steps:
step S101: and classifying the illumination modes generated by the intelligent lighting equipment through the convolutional neural network classifier.
In step S101, in the embodiment of the present disclosure, the intelligent devices in a home, office, or public place are networked and connected to a server through a network cable, Wi-Fi, or Bluetooth. The server may obtain various data of each intelligent device through the network or the intelligent connection, including the state and location of the device, its service mode, and its adjustable parameters. The server can also acquire GPS positioning information of the robot through the network and obtain the positions of the intelligent devices. The robot is connected with each intelligent lighting device in the environment through the network. The intelligent lighting equipment includes, but is not limited to: intelligent televisions, intelligent lamps, intelligent LED lamps, intelligent ceiling lamps, intelligent night-lights, intelligent bedside lamps, intelligent desk lamps, intelligent hanging lamps, mobile phones, intelligent sound boxes, and the like. The robot may be, for example, a wheeled humanoid robot, a sweeping robot, an intelligent sound box, a vending machine, or another intelligent interaction device, or an unmanned aerial vehicle, an intelligent automobile, a balance car, and so on. Optionally, the intelligent lighting equipment is configured with Wi-Fi, Bluetooth MESH, UWB, ultrasonic, and/or infrared sensors and communication technologies, so that it can communicate with the robot and with other intelligent lighting equipment. The intelligent lighting equipment further includes intelligent shading and/or reflecting devices, including but not limited to intelligent shade cloth, intelligent shade plates, and the like. The intelligent lighting devices fall into several categories: fixed in position, neither rotatable nor movable; fixed in position but able to rotate; movable, with height adjustable up and down while rotating left and right; and so on. The parameter settings of the intelligent lighting devices include, but are not limited to: device position, device angle, device height, light color, brightness, color temperature, flowing-light effects, and the like.
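To make this device-networking step concrete, the following minimal sketch models the server-side registry described above: each record holds the state, position, and adjustable parameters that the server is said to collect. All class, field, and device names here are illustrative assumptions; the embodiment does not prescribe a concrete data model.

```python
from dataclasses import dataclass, field

@dataclass
class SmartLightDevice:
    # Hypothetical record of one networked lighting device; the fields mirror
    # the data the server collects: state, position, adjustable parameters.
    device_id: str
    kind: str                      # e.g. "ceiling lamp", "desk lamp"
    position: tuple                # (x, y, z) in metres, from UWB/ultrasonic/GPS
    powered_on: bool = False
    brightness_lm: float = 0.0     # current luminous flux in lumens
    color_temp_k: float = 4000.0   # colour temperature in kelvin
    adjustable: dict = field(default_factory=dict)  # parameter -> (min, max)

class DeviceRegistry:
    """Hypothetical server-side registry of all networked lighting devices."""
    def __init__(self):
        self._devices = {}

    def register(self, dev: SmartLightDevice):
        self._devices[dev.device_id] = dev

    def query(self, device_id: str) -> SmartLightDevice:
        return self._devices[device_id]

registry = DeviceRegistry()
registry.register(SmartLightDevice(
    device_id="lamp-01", kind="desk lamp", position=(1.2, 0.4, 0.8),
    adjustable={"brightness_lm": (0, 800), "color_temp_k": (2700, 6500)}))
print(registry.query("lamp-01").adjustable)
```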
Specifically, the classifying, by the convolutional neural network classifier, the illumination mode generated by the intelligent lighting device includes: comparing the environment image with a standard image corresponding to the illumination mode of the classification mark in the convolutional neural network classifier; calculating the parameter similarity of the environment image and the standard image through image parameters; and marking the environment image with the parameter similarity larger than a first threshold as an illumination mode corresponding to the standard image, and recording the image ID of the marked environment image. The image ID of the environment image corresponds to different illumination modes according to different lamplight parameters when the robot shoots.
Referring to fig. 2, a schematic diagram of the convolutional neural network classifier model provided in an embodiment of the present disclosure is shown. In the convolutional neural network classifier (model), each environment image corresponds to an image ID, such as image ID1, image ID2, …, image IDn in the figure, and each image ID has multiple illumination modes, such as illumination mode 1, illumination mode 2, …, illumination mode N in the figure. Different image IDs may share the same illumination mode; for example, image ID1 and image ID3 may both have an illumination mode with a "Rembrandt light" effect. Each illumination mode corresponds to the device parameter settings of the intelligent lighting devices: in the figure, illumination mode 1 corresponds to device parameter setting 1, illumination mode 2 corresponds to device parameter setting 2, …, and illumination mode N corresponds to device parameter setting N. In the convolutional neural network classifier (model), the picture whose ID corresponds to an illumination mode is used as a reference picture; the reference picture is compared with the environment image, reference images whose similarity is higher than a certain threshold are selected, the illumination mode is selected according to those reference images, and the final adjustment target of the intelligent lighting equipment is set according to the device parameters corresponding to that illumination mode. When training the model, each intelligent lighting device can be lit separately to take a picture, different composite images can then be synthesized by permutation and combination, and the device parameter settings of each composite image are recorded. The composite image that achieves a given mode effect provides the picture ID of the corresponding reference picture.
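The labeling rule just described (mark an environment image with an illumination mode when its similarity to a reference picture exceeds the first threshold, then record the image ID) can be sketched as follows. The cosine similarity over feature vectors is a stand-in for the image-parameter comparison performed by the trained convolutional classifier; all identifiers and threshold values are illustrative.

```python
import math

def cosine_similarity(a, b):
    # Stand-in for the image-parameter similarity the classifier computes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

FIRST_THRESHOLD = 0.9

# Reference pictures: image ID -> (feature vector, illumination mode).
references = {
    "ID1": ([0.9, 0.1, 0.3], "Rembrandt light"),
    "ID2": ([0.2, 0.8, 0.5], "butterfly light"),
}

def label_environment_image(features):
    """Return (image_id, mode) labels whose similarity exceeds the threshold."""
    labels = []
    for image_id, (ref_vec, mode) in references.items():
        if cosine_similarity(features, ref_vec) > FIRST_THRESHOLD:
            labels.append((image_id, mode))
    return labels

print(label_environment_image([0.88, 0.12, 0.31]))  # -> [('ID1', 'Rembrandt light')]
```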
In this embodiment, the convolutional neural network classifier classifies through a deep learning algorithm model that covers: the color and brightness of the light sources, the light source positions, and the description keywords and usage scene obtained for the photographed object, where different people correspond to different scenes, lighting positions, and light source brightness/colors. For example: when target person A is in a special scene, the lit side uses one main light source at red 2000 lumens and one auxiliary light source at blue 500 lumens with loop light distribution, and a dramatic effect can be obtained; when shooting a half-body portrait of target person B, one main light source and one shading plate are used for Rembrandt lighting, and a high-contrast effect can be obtained.
In an embodiment of the present disclosure, the illumination models include: a flat/front light mode, a Paramount (butterfly) light mode, a loop light mode, a Rembrandt light mode, a connected (shadow-joining) light mode, a split light (half-face shadow) mode, a broad light mode, a short light mode, and the like.
Specifically:
a) Flat/front light mode: the light source is placed beside the camera, facing the photographed subject. This mode looks flat and does not give the shot depth, but it can present a clean, simple picture.
b) Paramount (butterfly) light mode: the light source is placed above the camera, looking down at the subject from a high position. When the light falls from above, a butterfly-shaped shadow forms under the nose. This lighting makes the subject look like a model in the lens and creates shadows under the cheekbones and chin, so the cheekbones stand out, the face looks thinner, and the chin appears more pointed, enhancing the subject's charm.
c) Loop light mode: based on Paramount light, the light source sits slightly above eye level and 30 to 40 degrees from the camera (depending on the individual face), still letting the light strike the subject's face from above. It casts a slight shadow from the nose toward the cheek; the nose shadow does not connect with the cheek shadow but points slightly downward, and the light source is not so high that the catch light is lost. This can create a somewhat more dramatic effect.
d) Rembrandt light mode: the light source is placed high, at 45 to 60 degrees from the subject, forming a small triangle of light on one side of the subject's face. This gives a high-contrast picture, and the angle can be used to suggest that the subject is going through the darkest period of life. If the Rembrandt light is too dark, a reflector can be added to weaken the shadow. Unlike loop lighting, the shadows of the nose and cheek are connected; more importantly, the eye on the shadow side still has a catch light, keeping the gaze bright and attractive while the photo retains its dramatic feel.
e) Connected (shadow-joining) light mode: the subject turns slightly away from the light source during shooting, and the light source is positioned higher than the head, so that the nose shadow connects with the cheek shadow. Not everyone suits this lighting: subjects with high cheekbones are ideal, while it is difficult to apply to subjects with a low nose bridge.
f) Split light (half-face shadow) mode: the light source is placed at 90 degrees to the left or right of the subject and may be moved slightly forward or backward to shift the contour. The lighting must change with the subject's face; when the head turns, the light should follow. In this mode the face is divided into a bright half and a dark half, producing a strong dramatic feeling. It suits people with strong personality or temperament, such as artists and musicians, giving a more masculine flavor; it can also suggest a secret the subject hides, or an unseen, shadowed side.
g) Broad light mode: this is not a specific light setup but a style, usable with split, loop, or Rembrandt lighting. The method is simple: turn the lit side toward the lens so that the lit side, and hence the whole face, looks wider. It suits people with thin faces.
h) Short light mode: the opposite of broad light; the darker side faces the lens, so that the face appears thinner, more three-dimensional, and more atmospheric.
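Since each of the modes above ultimately maps to device parameter settings, they can be encoded as presets, as in the following sketch; the angles, lumen values, and color temperatures are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LightPreset:
    # One device-parameter setting for a lighting mode (values illustrative).
    azimuth_deg: float      # horizontal angle from the camera axis
    elevation_deg: float    # height angle above the subject's eye line
    brightness_lm: float
    color_temp_k: float

MODE_PRESETS = {
    "flat":      LightPreset(azimuth_deg=0,  elevation_deg=0,  brightness_lm=600, color_temp_k=5000),
    "butterfly": LightPreset(azimuth_deg=0,  elevation_deg=45, brightness_lm=700, color_temp_k=4500),
    "loop":      LightPreset(azimuth_deg=35, elevation_deg=35, brightness_lm=650, color_temp_k=4500),
    "rembrandt": LightPreset(azimuth_deg=50, elevation_deg=45, brightness_lm=700, color_temp_k=4000),
    "split":     LightPreset(azimuth_deg=90, elevation_deg=0,  brightness_lm=600, color_temp_k=4000),
}

print(MODE_PRESETS["rembrandt"])
```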
Step S102: and acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment.
In step S102, in the present disclosure, a vision sensor is disposed on the robot, such as an image camera, a depth camera, a laser radar, and/or an ultrasonic sensor. The image camera is used to take photos or videos and to collect in real time the environment image or target image that the robot needs. The robot is connected with each intelligent lighting device in the environment through the network. The robot may be, for example, a wheeled humanoid robot, a sweeping robot, an intelligent sound box, a vending machine, or another intelligent interaction device, or an unmanned aerial vehicle, an intelligent automobile, a balance car, and so on. Through the vision sensor of the robot, an environment image of the environment where the robot is located can be acquired, and a target object in the image can be identified at the same time.
Specifically, the acquiring the environmental image by the robot, and constructing a digital twin scene according to the environmental image and the intelligent lighting equipment includes: shooting an environment image through a vision sensor of the robot; identifying intelligent lighting equipment in the environment image; establishing a mapping relation between the intelligent light equipment in the digital twin scene and the intelligent light equipment in the real scene; and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
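A minimal sketch of this twin-construction step follows, under the assumption that lights detected in the environment image are matched to registered real devices by nearest position; the patent specifies only that a mapping relation is established, not how the matching is done.

```python
import math

def build_twin_mapping(detected_lights, registered_devices, max_dist_m=0.5):
    """Map each light detected in the environment image to the nearest
    registered real device, producing the twin<->real correspondence."""
    mapping = {}
    for det_id, det_pos in detected_lights.items():
        best, best_d = None, max_dist_m
        for dev_id, dev_pos in registered_devices.items():
            d = math.dist(det_pos, dev_pos)
            if d < best_d:
                best, best_d = dev_id, d
        if best is not None:
            mapping[det_id] = best   # twin object -> real device
    return mapping

detected = {"det-0": (1.18, 0.42, 0.80)}        # from the vision sensor
registered = {"lamp-01": (1.20, 0.40, 0.80)}    # from the device registry
print(build_twin_mapping(detected, registered))  # {'det-0': 'lamp-01'}
```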
Step S103: inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image.
In step S103, in the embodiment of the present disclosure, a segment of footage that has not yet been recorded (the live in-camera view) is input into the deep learning algorithm model for detection, and an illumination mode is output. The mode includes: different people corresponding to different scenes, the lighting positions and the brightness and color of the light sources, and the corresponding mode description keywords, such as the Rembrandt mode.
Specifically, the inputting the environmental image into the convolutional neural network classifier and obtaining a plurality of illumination modes corresponding to the environmental image includes: inputting the environment image acquired by the robot into the convolutional neural network classifier; identifying the environment image and acquiring an image ID corresponding to the environment image; and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
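At inference time, step S103 reduces to a table lookup once the classifier has returned an image ID, as the following sketch shows; the table contents are illustrative.

```python
# Hypothetical table recorded during training: image ID -> illumination modes.
ID_TO_MODES = {
    "ID1": ["rembrandt", "split"],
    "ID2": ["butterfly", "loop"],
}

def candidate_modes(image_id):
    """Step S103: return the illumination modes recorded for this image ID."""
    return ID_TO_MODES.get(image_id, [])

print(candidate_modes("ID1"))  # ['rembrandt', 'split']
```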
Step S104: and receiving a selection instruction of the illumination mode of the environment image.
In step S104, in the embodiment of the present disclosure, the robot is configured with an intelligent camera, a laser radar, and/or a depth camera, and obtains the area and 3D shape of each target object or person in the scene through the laser radar and the depth camera. The illumination mode selection instruction of the user can be obtained through input or recognition, specifically including: acquiring the illumination mode selection instruction of the user through key input and/or touch screen input; or acquiring it through voice recognition, gesture recognition, action recognition, and/or expression recognition.
Specifically:
1. Automatic control mode one: adjusting the light by recognizing the face shape or emotion from the image:
a) a depth camera and a laser radar are provided at the camera position to capture facial expressions, limb movements, and gesture changes in real time;
b) the captured data are input into the deep learning model;
c) the light is adjusted according to the deep learning model (the adjustment range is configurable).
2. Automatic control mode two: adjusting the light by timbre, or by the emotion in speech:
a) at least one microphone acquires the person's voice in real time;
b) the voice is input into the deep learning model, which includes the emotions in the corresponding utterances, to adjust the display of different lights.
3. Manual control mode one: digital twin
a) The positions of the surrounding intelligent devices are acquired through a camera and/or a depth camera in front of the target person or object and mapped into the digital twin scene;
b) or the camera and the intelligent devices sense each other's positions through UWB/ultrasonic waves and map them into the digital twin scene;
c) in the digital twin three-dimensional space, the positions of the robot, the intelligent devices, and people can be seen. The intelligent devices can be adjusted by clicking and dragging on a screen, and the positions of the light sources and shading plates can be adjusted to control the light; alternatively, a light area around the photographed object can be clicked on the screen to adjust the brightness and positions of several lights in linkage.
4. Manual control mode two: Internet of Things
a) Through voice, the screen, or other media, the intelligent curtain is drawn, natural light from the window is blocked with the shading plate, and the beam of a spotlight is used.
5. Manual control mode three: voice input
a) In the deep learning model, light effects correspond to various spoken expressions.
b) Voice input, for example: "I want the 'Rembrandt' effect", "I want a 'cool' effect", "I want the 'Apple launch event' effect".
6. Key input control:
by setting an illumination mode key on the robot or the terminal device, a user can input and select through the key to generate a selection instruction.
7. Touch screen input control:
by setting an illumination mode touch screen key or area on the robot or the terminal equipment, a user can perform touch screen selection through the touch screen key or area, and a selection instruction is generated.
Step S105: and acquiring parameter settings of the intelligent light equipment corresponding to the illumination mode according to the selection instruction.
In step S105, in the embodiment of the present disclosure, each lighting mode corresponds to a parameter setting of at least one intelligent lighting apparatus.
Specifically, the obtaining, according to the selection instruction, the parameter setting of the intelligent lighting device corresponding to the illumination mode includes: acquiring an illumination mode selection instruction of a user through input or identification; acquiring parameter settings corresponding to the illumination modes according to the mapping relation between the light parameters of the intelligent light equipment; and acquiring an adjustable range of the parameter setting.
In the embodiment of the present disclosure, the camera used for shooting is adjustable, and its parameters include, but are not limited to: ISO, shutter, EV, aperture, focal length, fill-light illumination, the light source device on the camera, the camera position, and adjustable illumination lumens.
In embodiments of the present disclosure, adjustable intelligent light source parameters include, but are not limited to: the on/off state, projection range, angle, brightness, and color of the light source. In the digital twin world, an intelligent light source is turned on to obtain its lumens, the beam is directed at the object to obtain the illuminated (or affected) area, and the degree to which the object's surface is lit is obtained. This is expressed as the luminous flux received per unit area: with E denoting illuminance, S the area, and F the luminous flux, E (lux) = F (lumens) / S (square meters).
Optionally, the material/reflectivity and physical properties of the adjustable object may be recorded and selected.
According to the environment image, an intelligent photometer is present in the real world, and the illuminance value is measured at the subject position facing the light source. The brightness of the other lamps is adjusted according to the light ratio value, and the corresponding parameters are adjusted in the virtual scene as well, so that the light-receiving areas corresponding to the bright-dark gradient projected onto the object by the light sources can be simulated.
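The illuminance relation quoted above can be exercised directly, as in the following sketch; the uniform-beam assumption (all lumens spread evenly over the lit area) is a simplification of the digital-twin simulation described in the embodiment.

```python
def illuminance_lux(luminous_flux_lm, lit_area_m2):
    # E (lux) = F (lumens) / S (square metres), as stated in the description.
    return luminous_flux_lm / lit_area_m2

# A hypothetical 800 lm source whose beam covers 2 m^2 of the subject:
print(illuminance_lux(800, 2.0))  # 400 lux on the lit surface
```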
Step S106: and carrying out parameter adjustment on the intelligent lighting equipment in the digital twin scene according to the parameter setting.
In step S106, in the embodiment of the present disclosure, parameter adjustment of the intelligent lighting equipment includes, but is not limited to: position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or flowing-light adjustment of the intelligent lighting devices.
The light ratio is controlled to determine the brightness and contrast of the picture, forming different shadow tones, modeling effects, and artistic atmospheres. Light ratio measurements include, but are not limited to:
a) in the same scene, the ratio of illuminance values between different light sources, or the ratio of brightness values between the lit part and the shadowed/projected part of the same reflective surface;
b) the ratio of brightness values between adjacent surfaces of different reflectivity in the scene under the same light source, such as between a person and the background, or between the face and the clothing;
c) the ratio of illuminance or brightness values between the brightest and darkest parts of the scene.
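The first of these measures, together with the inverse of the illuminance formula from step S105, is enough to rescale a fill light toward a target light ratio, as the following sketch shows; the function names and numbers are illustrative.

```python
def light_ratio(key_lux, fill_lux):
    # Ratio of illuminance values between key-lit and fill-lit (shadow) sides.
    return key_lux / fill_lux

def fill_flux_for_ratio(key_lux, target_ratio, fill_area_m2):
    """Lumens the fill light must emit so key:fill illuminance hits target_ratio."""
    fill_lux = key_lux / target_ratio
    return fill_lux * fill_area_m2   # invert E = F / S

print(light_ratio(400, 100))               # 4.0 -> a contrasty 4:1 look
print(fill_flux_for_ratio(400, 2.0, 1.5))  # 300 lm for a softer 2:1 ratio
```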
In addition, before the step of receiving the instruction for selecting the illumination mode of the environmental image, the method further includes: establishing a corresponding relation between the illumination mode and the lamplight parameters of the intelligent lamplight equipment; and generating a control instruction of the intelligent lighting equipment according to the corresponding relation.
In addition, the method for adjusting the intelligent lighting equipment further comprises: mapping the parameter adjustment of the intelligent lighting equipment in the digital twin scene to the parameter adjustment of the intelligent lighting equipment in the real scene.
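This twin-to-real mapping can be read as a synchronization pass: every parameter changed on a twin object is forwarded to its mapped real device. In the sketch below, the transport function is a stub, since the embodiment leaves the command protocol (Wi-Fi, Bluetooth MESH, etc.) open.

```python
def send_to_device(device_id, params):
    # Stub transport: a real system would issue a network command
    # (Wi-Fi / Bluetooth MESH, per the description) to the physical lamp.
    print(f"-> {device_id}: {params}")

def sync_twin_to_real(twin_adjustments, twin_to_real):
    """Forward each twin object's adjusted parameters to its real device."""
    for twin_id, params in twin_adjustments.items():
        real_id = twin_to_real.get(twin_id)
        if real_id is not None:
            send_to_device(real_id, params)

sync_twin_to_real(
    {"det-0": {"brightness_lm": 700, "azimuth_deg": 50}},
    {"det-0": "lamp-01"},
)
```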
In addition, the method for adjusting the intelligent lighting equipment further comprises: identifying a target object in the environment image acquired by the robot; and lighting the target object and adjusting its light distribution according to the selected illumination mode.
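Lighting the recognized target can be reduced to re-aiming a device at the target's position. The pan/tilt geometry below is a simplified sketch assuming the device and target positions are known in one coordinate frame, as the digital twin scene provides.

```python
import math

def aim_angles(light_pos, target_pos):
    """Pan/tilt (degrees) that point a light at the recognized target."""
    dx = target_pos[0] - light_pos[0]
    dy = target_pos[1] - light_pos[1]
    dz = target_pos[2] - light_pos[2]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

# Hypothetical positions (metres): lamp high on the left, subject at centre.
print(aim_angles(light_pos=(0.0, 0.0, 2.2), target_pos=(1.5, 1.0, 1.6)))
```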
Fig. 3 is a schematic diagram of an apparatus for adjusting intelligent lighting equipment according to another embodiment of the disclosure. The apparatus comprises: a classification module 301, a scene construction module 302, an illumination mode acquisition module 303, an instruction module 304, a parameter acquisition module 305, and an adjustment module 306. Wherein:
the classifying module 301 is configured to classify, by using a convolutional neural network classifier, an illumination mode generated by the intelligent lighting device.
Specifically, the classification module is specifically configured to: comparing the environment image with a standard image corresponding to the illumination mode of the classification mark in the convolutional neural network classifier; calculating the parameter similarity of the environment image and the standard image through image parameters; and marking the environment image with the parameter similarity larger than a first threshold as an illumination mode corresponding to the standard image, and recording the image ID of the marked environment image. The image ID of the environment image corresponds to different illumination modes according to different lamplight parameters when the robot shoots.
The scene construction module 302 is configured to acquire an environmental image through a robot, and construct a digital twin scene according to the environmental image and the intelligent lighting device.
In the embodiment of the present disclosure, a vision sensor is disposed on the robot, such as an image camera, a depth camera, a laser radar, and/or an ultrasonic sensor. The image camera is used to take photos or videos and to collect in real time the environment image or target image that the robot needs. The robot is connected with each intelligent lighting device in the environment through the network. The robot may be, for example, a wheeled humanoid robot, a sweeping robot, an intelligent sound box, a vending machine, or another intelligent interaction device, or an unmanned aerial vehicle, an intelligent automobile, a balance car, and so on. Through the vision sensor of the robot, an environment image of the environment where the robot is located can be acquired, and a target object in the image can be identified at the same time.
The scene construction module is specifically configured to include: shooting an environment image through a vision sensor of the robot; identifying intelligent lighting equipment in the environment image; establishing a mapping relation between the intelligent light equipment in the digital twin scene and the intelligent light equipment in the real scene; and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
The illumination mode obtaining module 303 is configured to input the environmental image into the convolutional neural network classifier, and obtain a plurality of illumination modes corresponding to the environmental image.
In the embodiment of the present disclosure, a segment of footage that has not yet been recorded (the live camera view) is input into the deep learning algorithm model for detection, and an illumination mode is output. The mode includes: different people corresponding to different scenes, the lighting positions and the brightness and color of the light sources, and the corresponding mode description keywords, such as the Rembrandt mode.
Specifically, the illumination mode acquisition module is specifically configured to: inputting the environment image acquired by the robot into the convolutional neural network classifier; identifying the environment image and acquiring an image ID corresponding to the environment image; and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
The instruction module 304 is configured to receive a selection instruction of an illumination mode of the environmental image.
In the instruction module, the selection instruction is obtained through input or identification, and specifically includes: acquiring an illumination mode selection instruction of a user through key input and/or touch screen input; or, acquiring the illumination mode selection instruction of the user through voice recognition, gesture recognition, action recognition and/or expression recognition.
The parameter obtaining module 305 is configured to obtain parameter settings of the intelligent lighting device corresponding to the illumination mode according to the selection instruction.
Specifically, the parameter obtaining module is specifically configured to: acquiring an illumination mode selection instruction of a user through input or identification; acquiring parameter settings corresponding to the illumination modes according to the mapping relation between the light parameters of the intelligent light equipment; and acquiring an adjustable range of the parameter setting.
The adjusting module 306 is configured to perform parameter adjustment on the intelligent lighting device in the digital twin scene according to the parameter setting.
Specifically, the adjustment module is specifically configured to: perform position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or flowing-light adjustment on the intelligent lighting equipment.
Furthermore, the device further comprises:
and the mapping module is used for mapping the parameter adjustment of the intelligent lighting equipment in the digital twin scene to the parameter adjustment of the intelligent lighting equipment in the real scene.
Furthermore, the device further comprises:
the relation establishing module is used for establishing a corresponding relation between the illumination mode and the lamplight parameters of the intelligent lamplight equipment;
and the instruction generation module is used for generating a control instruction of the intelligent lighting equipment according to the corresponding relation.
The apparatus further comprises:
the identification module is used for identifying a target object in the environment image acquired by the robot;
and the target light adjustment module is used for lighting the target object and adjusting its light distribution according to the selected illumination mode.
The apparatus shown in fig. 3 may perform the method of the embodiment shown in fig. 1, and reference is made to the relevant description of the embodiment shown in fig. 1 for parts of this embodiment not described in detail. The implementation process and the technical effect of this technical solution refer to the description in the embodiment shown in fig. 1, and are not repeated here.
Referring now to fig. 4, a schematic diagram of an electronic device 400 suitable for use in implementing another embodiment of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a communication line 404. An input/output (I/O) interface 405 is also connected to the communication line 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer-readable medium may be included in the above electronic device, or it may exist separately without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods of the first aspect.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform any of the methods of the foregoing first aspect.
The foregoing description presents only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the features described above with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A method for adjusting intelligent lighting equipment, comprising:
classifying illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier;
acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image;
receiving a selection instruction of the illumination mode of the environment image;
acquiring the parameter settings of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and performing parameter adjustment on the intelligent lighting equipment in the digital twin scene according to the parameter settings.
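By way of illustration, claim 1 can be read as a six-step pipeline. The following Python sketch is a minimal illustration of that reading; every callable in it (capture_image, build_twin, and so on) is an assumed placeholder, since the claim prescribes steps rather than an implementation:

    # Illustrative pipeline for claim 1; all callables are assumed placeholders
    # supplied by the caller, not APIs defined in the patent.
    def adjust_intelligent_lighting(capture_image, build_twin, classifier,
                                    select_mode, lookup_params, apply_in_twin):
        env_image = capture_image()      # acquire the environment image via the robot
        twin = build_twin(env_image)     # construct the digital twin scene
        modes = classifier(env_image)    # candidate illumination modes for the image
        chosen = select_mode(modes)      # the user's selection instruction
        params = lookup_params(chosen)   # parameter settings for the chosen mode
        apply_in_twin(twin, params)      # adjust the lights inside the twin
        return twin

    # Example invocation with trivial stand-ins:
    twin = adjust_intelligent_lighting(
        capture_image=lambda: "img-001",
        build_twin=lambda img: {"image": img, "lights": {}},
        classifier=lambda img: ["reading", "cinema"],
        select_mode=lambda modes: modes[0],
        lookup_params=lambda mode: {"brightness": 0.8},
        apply_in_twin=lambda twin, p: twin["lights"].update(p),
    )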
2. The method of claim 1, wherein the classifying, by the convolutional neural network classifier, the illumination modes generated by the intelligent lighting equipment comprises:
comparing the environment image with a standard image corresponding to an illumination mode marked by classification in the convolutional neural network classifier;
calculating the parameter similarity between the environment image and the standard image from image parameters;
and marking the environment image whose parameter similarity is greater than a first threshold with the illumination mode corresponding to the standard image, and recording the image ID of the marked environment image.
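The claim does not fix which image parameters enter the similarity calculation or how the comparison is scored; the sketch below assumes mean brightness and contrast as the parameters and cosine similarity as the score, purely for illustration:

    # Sketch of the claim-2 labelling step under assumed parameter choices.
    import numpy as np

    FIRST_THRESHOLD = 0.9  # assumed value; the claim leaves the threshold open

    def image_parameters(img: np.ndarray) -> np.ndarray:
        """Reduce an image to a parameter vector (mean brightness, contrast)."""
        gray = img.mean(axis=-1)  # collapse the color channels
        return np.array([gray.mean() / 255.0, gray.std() / 255.0])

    def parameter_similarity(env_img: np.ndarray, std_img: np.ndarray) -> float:
        """Cosine similarity between the two images' parameter vectors."""
        a, b = image_parameters(env_img), image_parameters(std_img)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def label_image(env_img, std_img, mode_name, registry: dict):
        """Mark the image with the standard image's mode and record its ID."""
        if parameter_similarity(env_img, std_img) > FIRST_THRESHOLD:
            image_id = len(registry)  # simplistic running image ID
            registry[image_id] = mode_name
            return image_id
        return None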
3. The method according to claim 2, wherein the image ID of the environment image corresponds to different illumination modes according to the different light parameters used when the robot captured the image.
4. The method of claim 1, wherein the acquiring an environment image through the robot and constructing a digital twin scene according to the environment image and the intelligent lighting equipment comprises:
shooting an environment image through a vision sensor of the robot;
identifying the intelligent lighting equipment in the environment image;
establishing a mapping relation between the intelligent light equipment in the digital twin scene and the intelligent light equipment in the real scene;
and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
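A minimal sketch of the claimed mapping, assuming an externally supplied fixture detector (the claim does not name one):

    # Sketch of the claim-4 twin construction; detect_fixtures is an assumed
    # detector callback returning one descriptor per recognized light fixture.
    from typing import Callable, Dict, List

    def build_twin_scene(env_image: object,
                         detect_fixtures: Callable[[object], List[dict]]) -> Dict:
        """Build a twin scene whose lights map one-to-one onto real fixtures."""
        mapping = {}
        for i, fixture in enumerate(detect_fixtures(env_image)):
            mapping[f"twin_light_{i}"] = fixture  # twin ID -> real-scene fixture
        return {"environment": env_image, "light_mapping": mapping}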
5. The method of any one of claims 1 to 4, wherein the inputting the environment image into the convolutional neural network classifier and acquiring a plurality of illumination modes corresponding to the environment image comprises:
inputting the environment image acquired by the robot into the convolutional neural network classifier;
identifying the environment image and acquiring an image ID corresponding to the environment image;
and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
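The final lookup reduces to a table from image ID to stored modes; the registry layout and example entries below are invented for illustration:

    # Minimal lookup for claim 5: recognized image ID -> illumination modes.
    MODES_BY_IMAGE_ID = {
        17: ["reading", "warm-evening"],  # invented example entries
        42: ["cinema"],
    }

    def modes_for_image(image_id: int) -> list:
        """Return all illumination modes recorded for this image ID."""
        return MODES_BY_IMAGE_ID.get(image_id, [])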
6. The method of claim 1, wherein, prior to the step of receiving the selection instruction of the illumination mode of the environment image, the method further comprises:
establishing a correspondence between the illumination mode and the light parameters of the intelligent lighting equipment;
and generating a control instruction for the intelligent lighting equipment according to the correspondence.
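One way to picture this correspondence is a static table plus an instruction builder; the mode names and parameter values below are invented examples, not values from the patent:

    # Sketch for claim 6: mode -> light-parameter table and an instruction
    # generator; all concrete values are illustrative assumptions.
    MODE_TO_LIGHT_PARAMS = {
        "reading": {"brightness": 0.85, "color_temperature_k": 4500},
        "cinema":  {"brightness": 0.15, "color_temperature_k": 2700},
    }

    def control_instruction(mode: str, device_id: str) -> dict:
        """Build a device control instruction from the stored correspondence."""
        return {"device": device_id,
                "command": "set_light_params",
                "params": MODE_TO_LIGHT_PARAMS[mode]}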
7. The method according to claim 1, wherein the acquiring, according to the selection instruction, the parameter settings of the intelligent lighting equipment corresponding to the illumination mode comprises:
acquiring an illumination mode selection instruction of a user through input or recognition;
acquiring the parameter settings corresponding to the illumination mode according to the mapping relation between the illumination mode and the light parameters of the intelligent lighting equipment;
and acquiring an adjustable range of the parameter setting.
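Since claim 7 also requires the adjustable range of each setting, a natural companion step is to clamp requested values into per-parameter ranges; the ranges below are assumptions:

    # Sketch of the claim-7 range handling under assumed parameter ranges.
    ADJUSTABLE_RANGES = {
        "brightness": (0.0, 1.0),             # assumed normalized range
        "color_temperature_k": (2000, 6500),  # assumed range in kelvin
    }

    def clamp_to_range(params: dict) -> dict:
        """Restrict parameter settings to their adjustable ranges."""
        clamped = {}
        for name, value in params.items():
            lo, hi = ADJUSTABLE_RANGES[name]
            clamped[name] = min(max(value, lo), hi)
        return clamped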
8. The method of claim 7, wherein the obtaining the illumination mode selection instruction of the user through input or recognition comprises:
acquiring the illumination mode selection instruction of the user through key input and/or touch screen input; or
and acquiring an illumination mode selection instruction of the user through voice recognition, gesture recognition, action recognition and/or expression recognition.
9. The method of claim 1, wherein the performing parameter adjustment on the intelligent lighting equipment comprises:
and performing position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or luminous flux adjustment on the intelligent lighting equipment.
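The enumerated quantities can be collected into one state record; the field names and units below are assumptions made for this sketch:

    # Illustrative fixture state covering the claim-9 adjustable quantities.
    from dataclasses import dataclass

    @dataclass
    class FixtureState:
        position_m: tuple         # (x, y) floor position in metres
        height_m: float           # mounting height
        angle_deg: float          # beam angle
        color: str                # light color, e.g. "#FFE4B5"
        brightness: float         # normalized 0.0-1.0
        color_temperature_k: int  # color temperature in kelvin
        light_ratio: float        # key-to-fill light ratio
        luminous_flux_lm: int     # luminous flux in lumens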
10. The method according to claim 1, wherein the method further comprises:
and mapping the parameter adjustment of the intelligent lighting equipment in the digital twin scene to the parameter adjustment of the intelligent lighting equipment in the real scene.
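A sketch of the mirroring step, assuming send_command is whatever transport reaches the real fixtures (the patent does not specify one):

    # Replay twin-side adjustments on the real fixtures (claim 10).
    def mirror_twin_to_real(twin_adjustments: dict, light_mapping: dict,
                            send_command) -> None:
        for twin_id, params in twin_adjustments.items():
            real_fixture = light_mapping[twin_id]  # from the claim-4 mapping
            send_command(real_fixture, params)     # apply in the real scene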
11. The method according to claim 1, wherein the method further comprises:
identifying a target object in an environment image acquired by the robot;
and lighting the target object and adjusting its light distribution according to the selected illumination mode.
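A rough sketch of this step, in which both the target detector and the fixture aiming interface are assumed stand-ins:

    # Aim the fixtures at a detected target and apply the mode (claim 11).
    def light_target(env_image, detect_target, fixtures, mode_params):
        x, y, w, h = detect_target(env_image)  # target bounding box (assumed)
        cx, cy = x + w / 2, y + h / 2          # target centre
        for fixture in fixtures:
            fixture.aim_at(cx, cy)             # assumed fixture API
            fixture.apply(mode_params)         # distribute light per the mode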
12. An apparatus for adjusting intelligent lighting equipment, comprising:
the classification module is used for classifying illumination modes generated by the intelligent lighting equipment through the convolutional neural network classifier;
the scene construction module is used for acquiring an environment image through a robot and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
the illumination mode acquisition module is used for inputting the environment image into the convolutional neural network classifier and acquiring a plurality of illumination modes corresponding to the environment image;
the instruction module is used for receiving a selection instruction of the illumination mode of the environment image;
the parameter acquisition module is used for acquiring parameter settings of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and the adjusting module is used for carrying out parameter adjustment on the intelligent lighting equipment in the digital twin scene according to the parameter setting.
13. A robot, comprising:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the robot to implement the method of any one of claims 1-11.
14. An electronic device, comprising:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the electronic device to implement the method of any one of claims 1-11.
CN202210039500.0A 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment Active CN114364099B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210039500.0A CN114364099B (en) 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment
PCT/CN2023/072077 WO2023134743A1 (en) 2022-01-13 2023-01-13 Method for adjusting intelligent lamplight device, and robot, electronic device, storage medium and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210039500.0A CN114364099B (en) 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment

Publications (2)

Publication Number Publication Date
CN114364099A CN114364099A (en) 2022-04-15
CN114364099B (en) 2023-07-18

Family

ID=81109508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210039500.0A Active CN114364099B (en) 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment

Country Status (2)

Country Link
CN (1) CN114364099B (en)
WO (1) WO2023134743A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114364099B (en) * 2022-01-13 2023-07-18 达闼机器人股份有限公司 Method for adjusting intelligent light equipment, robot and electronic equipment
CN114913310B (en) * 2022-06-10 2023-04-07 广州澄源电子科技有限公司 LED virtual scene light control method
CN116073446B (en) * 2023-03-07 2023-06-02 天津天元海科技开发有限公司 Intelligent power supply method and device based on lighthouse multi-energy environment integrated power supply system
CN117042253A (en) * 2023-07-11 2023-11-10 昆山恩都照明有限公司 Intelligent LED lamp, control system and method
CN117202430B (en) * 2023-09-20 2024-03-19 浙江炯达能源科技有限公司 Energy-saving control method and system for intelligent lamp post
CN116963357B (en) * 2023-09-20 2023-12-01 深圳市靓科光电有限公司 Intelligent configuration control method, system and medium for lamp
CN117241445B (en) * 2023-11-10 2024-02-02 深圳市卡能光电科技有限公司 Intelligent debugging method and system for self-adaptive scene of combined atmosphere lamp
CN117952981B (en) * 2024-03-27 2024-06-21 常州星宇车灯股份有限公司 Intelligent indoor lamp detection device and method based on CNN convolutional neural network
CN118042689B (en) * 2024-04-02 2024-06-11 深圳市华电照明有限公司 Light control method and system for optical image recognition

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2666770C2 (en) * 2011-12-14 2018-09-12 Филипс Лайтинг Холдинг Б.В. Lighting control device
US11232502B2 (en) * 2017-12-20 2022-01-25 Signify Holding B.V. Lighting and internet of things design using augmented reality
US11985748B2 (en) * 2020-06-04 2024-05-14 Signify Holding B.V. Method of configuring a plurality of parameters of a lighting device
CN112492224A (en) * 2020-11-16 2021-03-12 广州博冠智能科技有限公司 Adaptive scene light supplement method and device for video camera
CN113824884B (en) * 2021-10-20 2023-08-08 深圳市睿联技术股份有限公司 Shooting method and device, shooting equipment and computer readable storage medium
CN114364099B (en) * 2022-01-13 2023-07-18 达闼机器人股份有限公司 Method for adjusting intelligent light equipment, robot and electronic equipment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205693941U (en) * 2016-06-17 2016-11-16 合肥三川自控工程有限责任公司 Multifunctional fire-fighting emergency lighting and evacuation light fixture and intelligence control system
WO2018219294A1 (en) * 2017-06-02 2018-12-06 广东野光源眼科技有限公司 Information terminal
WO2019090503A1 (en) * 2017-11-08 2019-05-16 深圳传音通讯有限公司 Image capturing method and image capturing system for intelligent terminal
CN111316633A (en) * 2017-11-08 2020-06-19 深圳传音通讯有限公司 Image shooting method and image shooting system of intelligent terminal
CN108805919A (en) * 2018-05-23 2018-11-13 Oppo广东移动通信有限公司 Light efficiency processing method, device, terminal and computer readable storage medium
WO2020056768A1 (en) * 2018-09-21 2020-03-26 Nokia Shanghai Bell Co., Ltd. Mirror
CN113711578A (en) * 2019-04-22 2021-11-26 恒久礼品股份有限公司 Intelligent toilet mirror loudspeaker system
CN110248450A (en) * 2019-04-30 2019-09-17 广州富港万嘉智能科技有限公司 A kind of combination personage carries out the method and device of signal light control
IT201900011304A1 (en) * 2019-07-10 2021-01-10 Rebernig Supervisioni Srl Adaptive Lighting Control Method and Adaptive Lighting System
WO2021098191A1 (en) * 2019-11-21 2021-05-27 天津九安医疗电子股份有限公司 Method for automatically adjusting illumination level of target scene, and intelligent illumination control system
CN111182233A (en) * 2020-01-03 2020-05-19 宁波方太厨具有限公司 Control method and system for automatic light supplement of shooting space
CN111586941A (en) * 2020-04-24 2020-08-25 苏州华普物联科技有限公司 Intelligent illumination control method based on neural network algorithm
CN113900384A (en) * 2021-10-13 2022-01-07 达闼科技(北京)有限公司 Method and device for interaction between robot and intelligent equipment and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A novel adjustable adaptive local dimming method; Zhang Tao; Hu Mengyang; Du Wenli; Wang Hao; Laser & Optoelectronics Progress, (12); full text *

Also Published As

Publication number Publication date
WO2023134743A1 (en) 2023-07-20
CN114364099A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN114364099B (en) Method for adjusting intelligent light equipment, robot and electronic equipment
US10952296B2 (en) Lighting system and method
US11887251B2 (en) System and techniques for patch color correction for an immersive content production system
US10261749B1 (en) Audio output for panoramic images
CN109472738A (en) Image irradiation correcting method and device, electronic equipment and storage medium
US11978154B2 (en) System and techniques for lighting adjustment for an immersive content production system
CN109618088A (en) Intelligent camera system and method with illumination identification and reproduction capability
CN109618089A (en) Intelligentized shooting controller, Management Controller and image pickup method
CN109993835A (en) A kind of stage interaction method, apparatus and system
US20230171508A1 (en) Increasing dynamic range of a virtual production display
Gaddy Media design and technology for live entertainment: Essential tools for video presentation
AU2022202424B2 (en) Color and lighting adjustment for immersive content production system
US11762481B2 (en) Light capture device
US20220343562A1 (en) Color and lighting adjustment for immersive content production system
NZ787202A (en) Color and lighting adjustment for immersive content production system
WO2023094882A1 (en) Increasing dynamic range of a virtual production display
WO2023094873A1 (en) Increasing dynamic range of a virtual production display
WO2023094880A1 (en) Increasing dynamic range of a virtual production display
WO2023094875A1 (en) Increasing dynamic range of a virtual production display
WO2023094874A1 (en) Increasing dynamic range of a virtual production display
WO2023094870A1 (en) Increasing dynamic range of a virtual production display
WO2023094877A1 (en) Increasing dynamic range of a virtual production display
WO2023094876A1 (en) Increasing dynamic range of a virtual production display
WO2023094881A1 (en) Increasing dynamic range of a virtual production display
WO2023094871A1 (en) Increasing dynamic range of a virtual production display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

GR01 Patent grant