WO2019085829A1 - Processing method and apparatus for a control system, storage medium, and electronic apparatus - Google Patents

Processing method and apparatus for a control system, storage medium, and electronic apparatus

Info

Publication number
WO2019085829A1
WO2019085829A1 (PCT/CN2018/112047)
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
image information
distance
sub
Prior art date
Application number
PCT/CN2018/112047
Other languages
English (en)
French (fr)
Inventor
周扬
王金桂
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019085829A1
Priority to US16/594,565 (published as US11275239B2)

Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/128 - Adjusting depth or disparity
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/296 - Synchronisation thereof; Control thereof
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0129 - Head-up displays characterised by optical features comprising devices for correcting parallax
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0132 - Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0136 - Head-up displays characterised by optical features comprising binocular systems with a single image source for both eyes
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/0138 - Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 - Head-up displays
    • G02B27/0101 - Head-up displays characterised by optical features
    • G02B2027/014 - Head-up displays characterised by optical features comprising information/image processing systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 - Stereoscopic image analysis
    • H04N2013/0081 - Depth or disparity estimation from stereoscopic image signals

Definitions

  • Embodiments of the present invention relate to the field of virtual reality, and in particular, to a processing method, apparatus, storage medium, and electronic device of a control system.
  • In the related art, which is based on live broadcast through a binocular camera, an additional depth camera is provided to obtain depth information, so that a user watching the show through a virtual reality (VR) glasses case can experience the feeling of being face-to-face with the actors.
  • The problem with this method is the introduction of a new camera, which increases the hardware cost of the control system.
  • The related art also processes the pictures obtained by a binocular camera through a complicated algorithm and calculates the depth information of each pixel in the image. Although this method can obtain accurate depth information, its computational cost is huge and it cannot run in real time.
  • Embodiments of the present invention provide a processing method, apparatus, storage medium, and electronic device of a control system, so as to at least solve the technical problem of the high control cost of control systems in the related art.
  • According to one embodiment, a processing method of a control system includes: acquiring, by a target imaging device in the control system, first image information of a target object currently moving in a real scene; acquiring a first distance corresponding to the first image information, where the first distance is the distance between the target imaging device and the target object; and adjusting a target parameter of the control system according to the first distance, where the target parameter is used to control the control system to output media information to a virtual reality device connected to the control system, the media information corresponds to movement information of the target object moving in the real scene, and the movement information includes the first distance.
  • According to another embodiment, a processing apparatus of a control system includes one or more processors and one or more memories storing program units, where the program units are executed by the processors.
  • The program units include: a first acquiring unit, configured to acquire, by a target imaging device in the control system, first image information of a target object currently moving in a real scene; a second acquiring unit, configured to acquire a first distance corresponding to the first image information, where the first distance is the distance between the target imaging device and the target object; and an adjusting unit, configured to adjust a target parameter of the control system according to the first distance, where the target parameter is used to control the control system to output media information to a virtual reality device connected to the control system.
  • The media information corresponds to movement information of the target object moving in the real scene, and the movement information includes the first distance.
  • A storage medium is also provided.
  • A computer program is stored in the storage medium, and the computer program is configured to execute the method of the embodiments of the present invention when run.
  • An electronic device includes a memory and a processor, where the memory stores a computer program and the processor is configured to execute the computer program to perform the method of an embodiment of the present invention.
  • In the embodiments of the present invention, the first image information of the target object currently moving in the real scene is acquired by the target imaging device in the control system; the first distance corresponding to the first image information is acquired, where the first distance is the distance between the target imaging device and the target object; and the target parameter of the control system is adjusted according to the first distance, where the target parameter is used to control the control system to output media information to the virtual reality device connected to the control system, the media information corresponds to the movement information of the target object moving in the real scene, and the movement information includes the first distance.
  • In this way, the first distance between the target imaging device and the target object can be acquired at low cost from the first image information of the target object, and the target parameter of the control system can then be adjusted according to the first distance, so that the control system outputs media information corresponding to the movement information of the target object to the virtual reality device. This avoids manual adjustment of the target parameter, achieves the purpose of controlling the output of media information to the virtual reality device through the target parameter of the control system, and produces the technical effect of reducing the control cost of the control system, thereby solving the technical problem of the high control cost of control systems in the related art.
  • FIG. 1 is a schematic diagram of a hardware environment of a processing method of a control system according to an embodiment of the present invention
  • FIG. 2 is a flow chart of a processing method of a control system according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a control system in accordance with an embodiment of the present invention.
  • FIG. 4 is a flowchart of a processing method of a computing center device of a control system according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a correspondence relationship between a parallax and a distance according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a control effect of the control system as experienced through virtual glasses according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another control effect of the control system as experienced through virtual glasses according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another control effect of the control system as experienced through virtual glasses according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a processing apparatus of a control system according to an embodiment of the present invention.
  • FIG. 10 is a block diagram showing the structure of an electronic device according to an embodiment of the invention.
  • According to an embodiment of the present invention, a processing method of a control system is provided.
  • The processing method of the foregoing control system may be applied to a hardware environment composed of the server 102, the control system 104 including the target imaging device, and the virtual reality device 106, as shown in FIG. 1.
  • FIG. 1 is a schematic diagram of the hardware environment of the processing method of the control system according to an embodiment of the present invention.
  • The server 102 is connected to the control system 104 via a network, which includes but is not limited to a wide area network, a metropolitan area network, or a local area network.
  • The control system 104 includes but is not limited to a stage control system and the like.
  • The processing method of the control system of the embodiment of the present invention may be executed by the server 102, by the control system 104, or jointly by the server 102 and the control system 104.
  • When the control system 104 executes the processing method of the embodiment of the present invention, the method may also be performed by a client installed on the control system.
  • The virtual reality device 106 includes but is not limited to a virtual reality helmet, virtual reality glasses, a virtual reality all-in-one device, and the like, and is used by the user to experience the media information that the control system outputs to the virtual reality device; the media information includes but is not limited to sound information, light information, and the like.
  • FIG. 2 is a flow chart of a method of processing a control system in accordance with an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps:
  • Step S202: acquire, by the target imaging device in the control system, the first image information of the target object currently moving in the real scene.
  • The control system may be a performance stage control system for controlling the performance effects of actors on a stage, or for controlling the performance effects of performers in a live-broadcast room, including control effects such as sound effects and lighting effects.
  • The control system includes a target imaging device, which may have two cameras; that is, the target imaging device is a binocular imaging device, for example, a binocular camera including a left camera and a right camera.
  • Using the principle of bionics, the binocular imaging device of this embodiment obtains synchronously exposed images through two calibrated cameras and then calculates three-dimensional depth information for the pixels of the acquired two-dimensional images.
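The description does not spell out the geometry behind this depth calculation, but for a calibrated, rectified binocular rig it is conventionally the pinhole-stereo triangulation relation Z = f·B/d. A minimal sketch follows; the focal length and baseline defaults are illustrative assumptions, not figures from the patent:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 800.0,
                         baseline_m: float = 0.06) -> float:
    """Pinhole-stereo triangulation: depth Z = f * B / d.

    disparity_px    -- horizontal pixel offset of the same scene point
                       between the left and right images
    focal_length_px -- camera focal length in pixels (assumed value)
    baseline_m      -- distance between the two cameras (assumed value)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px
```

Because depth is inversely proportional to disparity, a larger parallax always means a closer object, which is what lets the correspondence table described later map parallax to distance monotonically.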
  • The real scene of this embodiment includes, for example, the performance scene of an actor on a stage.
  • This embodiment acquires the first image information of the target object currently moving in the real scene by the above-described target imaging device.
  • The target object of this embodiment may be an actor, an anchor, an object, or the like in the real scene. As the target object moves in the real scene, the distance between the target object and the target imaging device changes, and the first image information of the target object may be collected in real time through the target imaging device.
  • The first image information may include image information and video information acquired by the binocular imaging device, and may also include the difference between the midpoint lateral coordinate of the image acquired by the left camera and the midpoint lateral coordinate of the image acquired by the right camera of the target imaging device; that is, the first image information further includes the image parallax. No limitation is made here.
  • Step S204: acquire a first distance corresponding to the first image information.
  • The first distance is the distance between the target imaging device and the target object.
  • After the first image information of the current target object is acquired by the target imaging device of the control system, the first distance corresponding to the first image information is acquired, where the first distance is the real-time distance of the target object from the target imaging device and may be the depth information from the target imaging device to the target object.
  • This embodiment obtains the first distance corresponding to the first image information from a correspondence table, where the correspondence table contains a pre-established data relationship between the image parallax and the distance between the target object and the target imaging device; the first distance corresponding to the current moment may be looked up from the correspondence table according to the image parallax in the first image information.
  • Step S206: adjust the target parameter of the control system according to the first distance.
  • The target parameter of the control system is adjusted according to the first distance, where the target parameter is used to control the control system to output media information to the virtual reality device; the virtual reality device is connected to the control system, the media information corresponds to the movement information of the target object moving in the real scene, and the movement information includes the first distance.
  • The target parameter of the control system of this embodiment may be a control parameter needed to achieve a certain performance effect when the stage is controlled, and is used to control the control system to output media information to the virtual reality device.
  • The target parameter may include a sound parameter of the control system, a light parameter, and the like, where the sound parameter is used to control the control system to output sound information to the virtual reality device, and the light parameter is used to control the control system to output light information to the virtual reality device.
  • The virtual reality device of this embodiment is connected to the control system and can present the image information and media information acquired by the control system. It may be a VR glasses case, which a user watching the performance can use to experience the control effect of the control system.
  • When the target object moves in the real scene, the target parameter of the control system is adjusted according to the change in the distance between the target object and the target imaging device.
  • For example, the sound parameters of the control system, such as the sound intensity (in dB), are adjusted; optionally, the sound intensity is inversely proportional to the square of the distance.
  • When the target object approaches the target imaging device while moving in the real scene, the sound is made louder by adjusting the sound parameters of the control system; when the target object moves away from the target imaging device, the sound parameters are adjusted to make the sound quieter, so that the sound the user hears while watching the performance is more realistic, giving the user an immersive experience.
  • This embodiment does not limit the rules for adjusting the sound parameters. Any adjustment rule that makes the sound louder when the target object approaches the target imaging device and quieter when it moves away is within the scope of the embodiments of the present invention.
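The embodiment only states that sound intensity is inversely proportional to the square of the distance; expressed in decibels, an intensity ratio of 1/d² corresponds to a level drop of 20·log10(d/d_ref). One possible adjustment rule might be sketched as follows, where the reference level and reference distance are illustrative assumptions:

```python
import math

def sound_level_db(distance_m: float,
                   ref_level_db: float = 90.0,
                   ref_distance_m: float = 1.0) -> float:
    """Inverse-square falloff expressed in dB: intensity ~ 1/d**2,
    so the level drops by 20*log10(d / d_ref) relative to the
    (assumed) reference level at the reference distance."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return ref_level_db - 20.0 * math.log10(distance_m / ref_distance_m)
```

With these assumed constants, every doubling of the distance lowers the output level by about 6 dB, which matches the louder-when-near, quieter-when-far behavior the embodiment describes.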
  • This embodiment may also adjust the lighting parameters of the control system to control the lights. For example, when the distance between the target object and the target imaging device decreases, the pan/tilt controlling the light automatically moves the focus of the light to an area near the target imaging device; when the distance between the target object and the target imaging device increases, the pan/tilt automatically moves the focus of the light to an area farther from the target imaging device. In this way the controlled spotlight can follow the target object as it moves back and forth, ensuring that the light shines well on the actor and giving the user an immersive experience.
  • This embodiment does not specifically limit the adjustment rules for adjusting the light parameters. Any adjustment rule under which the pan/tilt automatically moves the focus of the light to an area close to the target imaging device when the target object approaches it, and to an area farther from the target imaging device when the distance between them increases, is within the scope of the embodiments of the present invention, and no further examples are given here.
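One conceivable realization of the follow-the-actor rule above is to step the spotlight's focus distance toward the target's measured distance on each update, with a rate limit so the pan/tilt moves smoothly rather than jumping. All names and the step limit below are illustrative assumptions, not the patent's API:

```python
def update_light_focus(current_focus_m: float,
                       target_distance_m: float,
                       max_step_m: float = 0.5) -> float:
    """Move the spotlight's focus toward the target's current distance,
    limited to max_step_m per update so the pan/tilt tracks smoothly.
    The 0.5 m step limit is an assumed tuning value."""
    delta = target_distance_m - current_focus_m
    if abs(delta) <= max_step_m:
        return target_distance_m
    return current_focus_m + max_step_m * (1 if delta > 0 else -1)
```

Calling this once per distance measurement makes the light focus converge on the actor's position whether the actor walks toward or away from the camera.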
  • The media information of this embodiment corresponds to the movement information of the target object moving in the real scene.
  • The media information includes sound information, light information, and other information the user needs when watching a stage performance, where the sound information has a certain loudness: for example, when the target object approaches the target imaging device, the sound information becomes louder; when the target object moves away from the target imaging device, the sound information becomes quieter.
  • When viewing through the virtual reality device, for example a VR glasses case, what the user experiences as the target object comes closer is that the sound heard becomes louder, and what the user experiences as it moves farther away is that the sound heard becomes quieter, so that the sound the user hears while watching the performance is more realistic, bringing the user an immersive experience.
  • When the target object approaches the target imaging device, the media information is the light information generated when the pan/tilt of the light automatically moves the focus of the light to an area close to the target imaging device; when the target object moves away from the target imaging device, the media information is the light information generated when the pan/tilt automatically moves the focus of the light to an area farther from the target imaging device, thereby ensuring that the light shines well on the actor and bringing the user an immersive experience.
  • In addition to the above media information, this embodiment may include other information for stage control, and no limitation is imposed here.
  • Through step S202 to step S206, the first image information of the target object currently moving in the real scene is acquired by the target imaging device in the control system; the first distance corresponding to the first image information is acquired, where the first distance is the distance between the target imaging device and the target object; and the target parameter of the control system is adjusted according to the first distance, where the target parameter is used to control the control system to output media information to the virtual reality device connected to the control system, the media information corresponds to the movement information of the target object moving in the real scene, and the movement information includes the first distance.
  • In this way, the first distance between the target imaging device and the target object can be acquired at low cost from the first image information of the target object, and the target parameter of the control system can then be adjusted according to the first distance, so that the control system outputs the media information corresponding to the movement information of the target object to the virtual reality device. This avoids manual adjustment of the target parameter, achieves the purpose of controlling the output of media information to the virtual reality device through the target parameter of the control system, and produces the technical effect of reducing the control cost of the control system, thereby solving the technical problem of the high control cost of control systems in the related art.
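Steps S202 to S206 can be sketched end-to-end as a single per-frame routine: the midpoint x-coordinates stand in for the captured image information, a pre-built correspondence table turns parallax into distance, and a sound level follows the distance. The table contents, the rounding of the parallax, and the 90 dB reference are illustrative assumptions:

```python
import math

def process_frame(left_center_x: float, right_center_x: float,
                  disparity_to_distance: dict) -> dict:
    """One iteration of steps S202-S206, sketched:
    S202 -- midpoint x-coordinates from the left/right images,
    S204 -- parallax -> distance via an assumed correspondence table,
    S206 -- a target parameter (here a sound level in dB) set from it."""
    disparity = round(left_center_x - right_center_x)          # image parallax
    distance_m = disparity_to_distance[disparity]              # table lookup
    sound_db = 90.0 - 20.0 * math.log10(max(distance_m, 0.1))  # inverse-square rule
    return {"disparity": disparity,
            "distance_m": distance_m,
            "sound_db": sound_db}
```

A real implementation would add interpolation between table entries and drive the light pan/tilt as well, but the data flow is the same three-step pipeline the embodiment describes.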
  • In step S204, acquiring the first distance corresponding to the first image information includes: acquiring the first image parallax between the first sub-image information and the second sub-image information in the first image information, where the first sub-image information is obtained by a first camera, the second sub-image information is obtained by a second camera, the first camera and the second camera are deployed in the target imaging device, and the first image parallax is used to represent the difference between the first image of the target object indicated by the first sub-image information and the second image of the target object indicated by the second sub-image information; and acquiring, from a target correspondence table, the first distance corresponding to the first image parallax.
  • The image information acquired by the target imaging device includes the image parallax; that is, the first image information includes the first image parallax, which is used to characterize the difference between the first image of the target object indicated by the first sub-image information and the second image of the target object indicated by the second sub-image information. The first image parallax is obtained from the first sub-image information and the second sub-image information; for example, it is obtained as the difference between the midpoint lateral coordinate of the first sub-image and the midpoint lateral coordinate of the second sub-image.
  • The first sub-image information is obtained by the first camera in the target imaging device capturing the target object, and the first camera may be the left camera or the right camera of the binocular camera; the second sub-image information is obtained by the second camera in the target imaging device capturing the target object.
  • When the first camera is the left camera of the binocular camera, the second camera may be the right camera; when the first camera is the right camera of the binocular camera, the second camera may be the left camera.
  • The target correspondence table of this embodiment is a pre-established data table of the relationship between image parallax and distance. It contains correspondences between image parallaxes, which represent the differences between the different images acquired by the target imaging device and include the first image parallax of this embodiment, and distances between the target imaging device and the target object, which include the first distance of this embodiment.
  • Once the image parallax is determined, the distance can be determined from the image parallax and the correspondence between image parallax and distance, thereby achieving the purpose of quickly and inexpensively acquiring the distance corresponding to the image parallax.
  • The target correspondence table stores the correspondence between the first image parallax and the first distance. After the first image parallax between the first sub-image information and the second sub-image information is acquired, the first distance corresponding to the first image parallax is obtained from the target correspondence table. In this way, the first distance can be acquired quickly and at low cost, the target parameter of the control system can then be adjusted according to the first distance, and the media information corresponding to the target parameter can be output, achieving the technical effect of reducing the control cost of the control system and improving the user experience.
  • the target correspondence table may store other correspondences between the image disparity and the distance in advance, in addition to storing the correspondence between the first image disparity and the first distance.
  • the image parallax corresponding to a distance may be acquired while the distance between the target imaging device and the target object is known, and the distance and its corresponding image parallax are then stored in the target correspondence table. For example, the distance between the target imaging device and the target object is set to D1 meters, image information of the target object at distance D1 from the target imaging device is acquired by the target imaging device, the image parallax is obtained from that image information, and the pair of D1 meters and the image parallax corresponding to D1 meters is stored in the target correspondence table. The distance between the target imaging device and the target object is then set to D2 meters, where D2 is different from D1, image information of the target object at distance D2 from the target imaging device is acquired by the target imaging device, and the resulting image disparity is likewise stored in the target correspondence table.
  • acquiring the first image disparity between the first sub-image information and the second sub-image information in the first image information comprises: acquiring a first midpoint lateral coordinate in the first sub-image information, wherein the first midpoint lateral coordinate is the lateral coordinate of the center point of the first image in the target coordinate system; acquiring a second midpoint lateral coordinate in the second sub-image information, wherein the second midpoint lateral coordinate is the lateral coordinate of the center point of the second image in the target coordinate system; and determining the difference between the first midpoint lateral coordinate and the second midpoint lateral coordinate as the first image disparity.
  • the image information includes a midpoint lateral coordinate, which is a lateral coordinate value of the center point of the image in the target coordinate system.
  • the first sub-image information may be the screen information collected by the left camera, which may be a left portrait whose center point in the target coordinate system is (X1, Y1); the first midpoint lateral coordinate of the left portrait is X1. The second midpoint lateral coordinate is acquired from the second sub-image information, which may be the screen information collected by the right camera; that screen information may be a right portrait whose center point in the target coordinate system is (X2, Y2), and the second midpoint lateral coordinate of the right portrait is X2.
  • after the first midpoint lateral coordinate in the first sub-image information and the second midpoint lateral coordinate in the second sub-image information are acquired, the difference between them is determined as the first image parallax; that is, (X1-X2) may be determined as the first image parallax. The first distance corresponding to this image disparity is then acquired, the target parameter of the control system is adjusted according to the first distance, and the media information corresponding to the target parameter is output, thereby achieving the technical effect of reducing the control cost of the control system and improving the user experience.
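A minimal sketch of this disparity computation, assuming the detected portrait center points are available as (x, y) pairs (the coordinate values below are illustrative, not from the disclosure):

```python
def image_disparity(left_center, right_center):
    """First image disparity: the difference of the midpoint lateral
    (x) coordinates of the left and right portraits, i.e. X1 - X2."""
    (x1, _y1), (x2, _y2) = left_center, right_center
    return x1 - x2

# e.g. left portrait centered at (320, 240), right at (290, 240):
# image_disparity((320, 240), (290, 240)) -> 30
```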
  • acquiring the first distance corresponding to the first image disparity in the target correspondence table includes: searching the target correspondence table for the target image disparity having the smallest difference from the first image disparity, and determining the distance corresponding to the target image disparity in the target correspondence table as the first distance.
  • the target correspondence table is a pre-established data relationship between image disparity and distance, but the image disparity is calculated in real time, so the calculated image disparity may not appear in the pre-established correspondence table. In that case, the target image disparity whose difference from the first image disparity is smallest may be found in the target correspondence table, the difference being an absolute difference; that is, the image parallax in the table closest to the first image disparity is determined as the target image disparity, and the distance corresponding to the target image disparity in the target correspondence table is determined as the first distance. As before, the target parameter of the control system can then be adjusted according to the first distance and the media information corresponding to the target parameter output, thereby achieving the technical effect of reducing the control cost of the control system and improving the user experience.
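The nearest-disparity search can be sketched as follows (the table contents are hypothetical calibration values, not numbers from the disclosure):

```python
def nearest_distance(table, measured_disparity):
    """Look up the distance whose stored disparity is closest
    (by absolute difference) to the measured disparity.

    `table` maps disparity (pixels) -> distance (meters)."""
    best_disparity = min(table, key=lambda d: abs(d - measured_disparity))
    return table[best_disparity]

# Hypothetical pre-established table: disparity -> distance
table = {80: 1.0, 40: 2.0, 20: 4.0}
# nearest_distance(table, 37) -> 2.0   (37 px is closest to 40 px)
```

A linear scan is adequate for a small calibration table; a larger table could keep the disparities sorted and use a binary search instead.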
  • before the first distance corresponding to the first image disparity is acquired from the target correspondence table, the method further includes: acquiring, by the target imaging device, second image information of the target object, where the distance between the target object and the target imaging device is a first target distance; acquiring, in the second image information, a second image disparity between third sub-image information and fourth sub-image information, wherein the third sub-image information is obtained by the first camera capturing the target object, the fourth sub-image information is obtained by the second camera capturing the target object, and the second image disparity represents the difference between the third image of the target object indicated by the third sub-image information and the fourth image of the target object indicated by the fourth sub-image information; establishing a correspondence between the first target distance and the second image disparity in the target correspondence table; acquiring, by the target imaging device, third image information of the target object, where the distance between the target object and the target imaging device is a second target distance different from the first target distance; acquiring, in the third image information, a third image disparity between fifth sub-image information and sixth sub-image information, wherein the fifth sub-image information is obtained by the first camera, the sixth sub-image information is obtained by the second camera, and the third image disparity represents the difference between the fifth image of the target object indicated by the fifth sub-image information and the sixth image of the target object indicated by the sixth sub-image information; and establishing a correspondence between the second target distance and the third image disparity in the target correspondence table.
  • the data relationship in the target correspondence table is established before the first distance corresponding to the first image disparity is acquired from the target correspondence table. The first target distance may be a preset distance, for example D1 meters, which may be the farthest distance between the target imaging device and the target object at which the user still has a three-dimensional (3D) experience.
  • the target object is placed in an area at the first target distance from the target imaging device, and the second image information obtained by capturing the target object in real time is acquired by the target imaging device.
  • after the second image information is acquired, the second image disparity between the third sub-image information and the fourth sub-image information is acquired from the second image information. The second image disparity represents the difference between the third image of the target object indicated by the third sub-image information and the fourth image of the target object indicated by the fourth sub-image information, and is obtained from the third sub-image information and the fourth sub-image information; for example, the second image parallax is obtained as the difference between the midpoint lateral coordinate of the third sub-image and the midpoint lateral coordinate of the fourth sub-image.
  • the third sub-image information is obtained by the first camera in the target imaging device capturing the target object, and the first camera may be the left camera of the binocular camera; the fourth sub-image information is obtained by the second camera in the target imaging device capturing the target object, and the second camera may be the right camera of the binocular camera. A correspondence between the first target distance and the second image disparity is then established in the target correspondence table.
  • third image information of the target object at a second target distance from the target imaging device is then acquired by the target imaging device, where the second target distance is a preset distance, for example D2 meters, and may be defined relative to the first target distance; for example, the second target distance may differ from the first target distance by 5 meters. The target object is placed in an area at the second target distance from the target imaging device, and the third image information obtained by capturing the target object in real time is acquired by the target imaging device.
  • after the third image information is acquired, the third image disparity between the fifth sub-image information and the sixth sub-image information is acquired from the third image information. The third image disparity represents the difference between the fifth image of the target object indicated by the fifth sub-image information and the sixth image of the target object indicated by the sixth sub-image information, and is obtained from the fifth sub-image information and the sixth sub-image information; for example, the third image parallax is obtained as the difference between the midpoint lateral coordinate of the fifth sub-image and the midpoint lateral coordinate of the sixth sub-image.
  • the fifth sub-image information is obtained by the first camera in the target imaging device capturing the target object, and the first camera may be the left camera of the binocular camera; the sixth sub-image information is obtained by the second camera in the target imaging device capturing the target object, and the second camera may be the right camera of the binocular camera. A correspondence between the second target distance and the third image parallax is then established in the target correspondence table.
  • this embodiment may continuously change the distance between the target imaging device and the target object and repeat the above steps, for example establishing a correspondence between a third target distance and a fourth image parallax in the target correspondence table, and so on, thereby establishing a target correspondence table containing the data relationship between image disparity and distance.
  • the above method for establishing the target correspondence table containing the data relationship between image parallax and distance is only a preferred embodiment of the present invention and does not limit the manner of establishing the target correspondence table; any method for establishing the target correspondence table is within the scope of the embodiments of the present invention and is not illustrated here one by one.
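The calibration procedure above can be sketched as a loop over known distances. The camera capture and disparity measurement are stubbed out, since the disclosure does not specify a camera API; the stand-in measurement function and all numbers below are illustrative:

```python
def build_correspondence_table(distances_m, measure_disparity):
    """Place the target at each known distance, measure the disparity
    between the two sub-images, and store disparity -> distance pairs."""
    table = {}
    for d in distances_m:              # e.g. D1, D2, D3 ... meters
        disparity = measure_disparity(d)   # stub for a real capture
        table[disparity] = d
    return table

# Illustrative stand-in for capturing the target at distance d and
# taking the left/right midpoint-coordinate difference:
fake_measure = lambda d: round(80.0 / d)
table = build_correspondence_table([1.0, 2.0, 4.0], fake_measure)
# table -> {80: 1.0, 40: 2.0, 20: 4.0}
```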
  • acquiring the second image disparity between the third sub-image information and the fourth sub-image information comprises: acquiring a third midpoint lateral coordinate in the third sub-image information, wherein the third midpoint lateral coordinate is the lateral coordinate of the center point of the third image in the target coordinate system; acquiring a fourth midpoint lateral coordinate in the fourth sub-image information, wherein the fourth midpoint lateral coordinate is the lateral coordinate of the center point of the fourth image in the target coordinate system; and determining the difference between the third midpoint lateral coordinate and the fourth midpoint lateral coordinate as the second image parallax. Acquiring the third image disparity between the fifth sub-image information and the sixth sub-image information comprises: acquiring a fifth midpoint lateral coordinate in the fifth sub-image information, wherein the fifth midpoint lateral coordinate is the lateral coordinate of the center point of the fifth image in the target coordinate system; acquiring a sixth midpoint lateral coordinate in the sixth sub-image information, wherein the sixth midpoint lateral coordinate is the lateral coordinate of the center point of the sixth image in the target coordinate system; and determining the difference between the fifth midpoint lateral coordinate and the sixth midpoint lateral coordinate as the third image parallax.
  • the third midpoint lateral coordinate in the third sub-image information is acquired. The third sub-image information may be the screen information collected by the left camera, which may be a left portrait whose center point in the target coordinate system is (X3, Y3); the third midpoint lateral coordinate of the left portrait is X3. The fourth midpoint lateral coordinate in the fourth sub-image information is acquired; the fourth sub-image information may be the screen information collected by the right camera, which may be a right portrait whose center point in the target coordinate system is (X4, Y4); the fourth midpoint lateral coordinate of the right portrait is X4. After the third midpoint lateral coordinate in the third sub-image information and the fourth midpoint lateral coordinate in the fourth sub-image information are acquired, the difference between them is determined as the second image parallax; that is, (X3-X4) is determined as the second image parallax, and a correspondence between the first target distance and the second image parallax is then established in the target correspondence table.
  • the fifth midpoint lateral coordinate in the fifth sub-image information is acquired. The fifth sub-image information may be the screen information collected by the left camera, which may be a left portrait whose center point in the target coordinate system is (X5, Y5); the fifth midpoint lateral coordinate of the left portrait is X5. The sixth midpoint lateral coordinate in the sixth sub-image information is acquired; the sixth sub-image information may be the screen information collected by the right camera, which may be a right portrait whose center point in the target coordinate system is (X6, Y6); the sixth midpoint lateral coordinate of the right portrait is X6. After the fifth midpoint lateral coordinate in the fifth sub-image information and the sixth midpoint lateral coordinate in the sixth sub-image information are acquired, the difference between them is determined as the third image parallax; that is, (X5-X6) is determined as the third image parallax, and a correspondence between the second target distance and the third image disparity is then established in the target correspondence table.
  • correspondences between other target distances and image disparities can also be established by the foregoing method, and are not illustrated here one by one.
  • acquiring the first midpoint lateral coordinate in the first sub-image information includes at least one of the following: in a case where the first image information is image information of a human face, the average of the lateral coordinate of the center point of the left-eye image in the target coordinate system and the lateral coordinate of the center point of the right-eye image in the target coordinate system is determined as the first midpoint lateral coordinate; in a case where the first image information is image information of a human face, the lateral coordinate of the center point of the nose image in the target coordinate system is determined as the first midpoint lateral coordinate; in a case where the first image information is image information of a portrait, the average of the lateral coordinate of the center point of the left-hand image in the target coordinate system and the lateral coordinate of the center point of the right-hand image in the target coordinate system is determined as the first midpoint lateral coordinate; and in a case where the first image information is image information of a portrait, the average of the lateral coordinate of the center point of the left-arm image in the target coordinate system and the lateral coordinate of the center point of the right-arm image in the target coordinate system is determined as the first midpoint lateral coordinate.
  • an open-source face recognition algorithm may be used to obtain the image areas acquired by the first camera and the second camera in the target imaging device, for example the portrait areas of the left camera and the right camera; the third midpoint lateral coordinate of the third sub-image information and the fourth midpoint lateral coordinate of the fourth sub-image information are then calculated according to a certain rule, giving the lateral coordinate for the left camera and the lateral coordinate for the right camera respectively. The above rule may be, but is not limited to, one of the following:
  • in a case where the second image information is image information of a human face, the average of the pixel coordinates of the two eyes in the face is determined as the third midpoint lateral coordinate; in a case where the second image information is image information of a human face, the pixel coordinate of the nose in the face is determined as the third midpoint lateral coordinate; in a case where the second image information is image information of a portrait, the average of the pixel coordinates of the left hand and the right hand in the portrait is determined as the third midpoint lateral coordinate; and in a case where the second image information is image information of a portrait, the average of the pixel coordinates of the left arm and the right arm in the portrait is determined as the third midpoint lateral coordinate.
  • the first midpoint lateral coordinate, the second midpoint lateral coordinate, the fourth midpoint lateral coordinate, and the fifth midpoint lateral coordinate enumerated in this embodiment may all be acquired according to the foregoing rules for determining the third midpoint lateral coordinate, which are not illustrated here one by one.
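A hedged sketch of these midpoint rules follows. The landmark dictionary format is an assumption for illustration; a real implementation would take landmark positions from whatever open-source face recognition library is used:

```python
def midpoint_lateral_coordinate(landmarks):
    """Midpoint x-coordinate of a detected person, following the rules
    above: average of the two eyes, else the nose, else the average of
    the two hands, else the average of the two arms.

    `landmarks` maps a feature name to its (x, y) center point."""
    def avg_x(a, b):
        return (landmarks[a][0] + landmarks[b][0]) / 2

    if "left_eye" in landmarks and "right_eye" in landmarks:
        return avg_x("left_eye", "right_eye")
    if "nose" in landmarks:
        return landmarks["nose"][0]
    if "left_hand" in landmarks and "right_hand" in landmarks:
        return avg_x("left_hand", "right_hand")
    if "left_arm" in landmarks and "right_arm" in landmarks:
        return avg_x("left_arm", "right_arm")
    raise ValueError("no usable landmarks")

# midpoint_lateral_coordinate({"left_eye": (300, 200),
#                              "right_eye": (340, 200)}) -> 320.0
```

The same function applied to the left and right camera frames yields the two lateral coordinates whose difference is the image disparity; the priority order among the rules is a design choice of this sketch, since the disclosure lists them as alternatives.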
  • adjusting the target parameter of the control system according to the first distance comprises: increasing the sound parameter of the control system when the change state of the first distance indicates that the first distance becomes smaller, wherein the target parameter includes the sound parameter, the media information includes sound information, and the sound parameter is used to control the control system to output sound information to the virtual reality device; and reducing the sound parameter of the control system when the change state of the first distance indicates that the first distance becomes larger.
  • the target parameter of this embodiment may include a sound parameter. When the target parameter of the control system is adjusted according to the first distance and the change state of the first distance indicates that the first distance becomes smaller, the sound parameter of the control system is increased, so that when the user experiences the scene through virtual glasses and the target object comes closer to the user, the sound heard by the user becomes louder; optionally, the sound intensity (unit: dB) is inversely proportional to the square of the first distance. When the change state of the first distance indicates that the first distance becomes larger, the sound parameter of the control system is reduced, so that as the target object moves farther from the user, the sound heard by the user becomes quieter.
  • the control system can also automatically detect changes in the distance between the target object and the target imaging device and adjust, in real time, the sound information that the control system outputs to the virtual reality device, that is, adjust the volume the user receives, so that the scene the user experiences when watching a stage performance is more realistic, simulating a face-to-face live or communication experience.
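Following the inverse-square rule the disclosure suggests, the volume adjustment can be sketched as below; the reference volume and reference distance are hypothetical tuning values, not parameters from the disclosure:

```python
def volume_for_distance(distance_m, ref_volume=1.0, ref_distance_m=2.0):
    """Scale volume with the inverse square of the distance, so the
    sound gets louder as the target object approaches the camera."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return ref_volume * (ref_distance_m / distance_m) ** 2

# Halving the distance quadruples the volume:
# volume_for_distance(1.0) -> 4.0
# volume_for_distance(2.0) -> 1.0
# volume_for_distance(4.0) -> 0.25
```

A real controller would clamp the result to the output device's valid volume range; the disclosure leaves the exact numerical rule open.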
  • adjusting the target parameter of the control system according to the first distance includes: adjusting a light parameter of the control system to a first value when the change state of the first distance indicates that the first distance becomes smaller, so that the light of the control system is focused within the target area of the target imaging device, wherein the target parameter includes the light parameter, the media information includes light information, and the light parameter is used to control the control system to output the light information to the virtual reality device; and adjusting the light parameter of the control system to a second value when the change state of the first distance indicates that the first distance becomes larger, so that the light of the control system is focused outside the target area of the target imaging device.
  • the target parameter of this embodiment may include a light parameter. When the change state of the first distance indicates that the first distance becomes smaller, the light parameter of the control system is adjusted to the first value, so that the light of the control system is focused within the target area of the target imaging device; for example, when the distance from the object/actor to the camera decreases, the pan/tilt head of the light can automatically move the focus of the light to an area close to the camera. When the change state of the first distance indicates that the first distance becomes larger, the light parameter of the control system is adjusted to the second value, so that the light of the control system is focused outside the target area of the target imaging device. In this way, the spotlight moves as the actor moves back and forth, ensuring that the light stays well focused on the actor.
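A hedged sketch of the light-following logic; the two-valued parameter and the return convention are assumptions for illustration, since the disclosure does not specify the pan/tilt interface:

```python
def light_focus_value(prev_distance_m, curr_distance_m,
                      near_value=1, far_value=2):
    """Return the light parameter: the first value when the distance
    shrinks (focus within the target area, near the camera), the
    second value when it grows (focus outside the target area)."""
    if curr_distance_m < prev_distance_m:
        return near_value   # actor approaching: pull the focus in
    if curr_distance_m > prev_distance_m:
        return far_value    # actor receding: push the focus out
    return None             # unchanged: leave the light as-is

# light_focus_value(3.0, 2.0) -> 1
# light_focus_value(2.0, 3.5) -> 2
```

In practice the value would drive a pan/tilt head continuously rather than as two discrete positions; the disclosure leaves the numerical rule open.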
  • in step S204, acquiring the first distance corresponding to the first image information includes: acquiring, by the computing center device in the control system, the first distance corresponding to the first image information, where the computing center device is connected to the target imaging device; and adjusting the target parameter of the control system according to the first distance includes: receiving, by the controller in the control system, the first distance sent by the computing center device, and adjusting the target parameter of the control system according to the first distance.
  • the control system of this embodiment includes a target imaging device, a computing center device, and a controller.
  • the target imaging device can be a binocular camera.
  • the target imaging device is configured to collect the first image information in real time and transmit it to the computing center device, where the first image information may be a binocular picture or video. The target imaging device and the computing center device are connected wirelessly or by wire; the wireless connection may be microwave communication, infrared communication, laser communication, and the like, and the wired connection may be a Universal Serial Bus (USB), a network cable, and the like, with no limitation imposed here.
  • the computing center device obtains the first distance corresponding to the first image information; for example, the computing center device completes the processing of the binocular picture or video and obtains the first distance of the target object from the target imaging device, for instance by establishing the data relationship between image parallax and depth information, calculating the image parallax in real time to obtain real-time depth data, and issuing control commands to the controller. The controller receives the first distance sent by the computing center device, which triggers the control logic of the controller; the controller adjusts the target parameter of the control system according to the first distance and outputs the media information corresponding to the target parameter.
  • this embodiment makes use of the principle that the target imaging device captures different image parallaxes for the same target object at different distances. The correspondence between each distance and its image parallax is obtained by actual measurement or optical simulation, and the distance from the target object to the camera is then calculated in real time and at low cost from the image parallax in the image information of the target object, completing automatic control of the stage lighting and sound and achieving the technical effect of reducing the control cost of the control system, thereby improving the user experience.
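The pieces above can be tied together in one control tick, sketched here under stated assumptions: the midpoint coordinates, correspondence table, and volume command interface are all illustrative stand-ins, not APIs from this disclosure:

```python
def control_step(table, x_left, x_right, prev_distance, set_volume):
    """One tick: measured disparity -> nearest tabled distance ->
    volume command when the distance changed."""
    disparity = x_left - x_right
    nearest = min(table, key=lambda d: abs(d - disparity))
    distance = table[nearest]
    if prev_distance is not None and distance != prev_distance:
        # inverse-square rule: louder as the actor approaches
        set_volume((1.0 / distance) ** 2)
    return distance

table = {80: 1.0, 40: 2.0, 20: 4.0}   # hypothetical calibration
commands = []
d = control_step(table, 320, 281, None, commands.append)  # ~40 px
d = control_step(table, 320, 242, d, commands.append)     # ~80 px
# d -> 1.0 and commands -> [1.0]  (volume raised at the closer step)
```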
  • the methods according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, but in many cases the former is the better implementation. The technical solution of the present invention, in essence or in its contribution over the related art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or a CD-ROM), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • this embodiment makes use of the principle that the binocular camera captures different parallaxes for the same object/user at different distances; the correspondence between different depth distances of the object and the corresponding image parallax is obtained by actual measurement or optical simulation, so that the distance between the object/user and the camera can be calculated in real time and at low cost to complete automatic control of the stage lighting and sound.
  • control system includes: a binocular camera 1, a computing center device 2, and a controller 3.
  • the binocular camera 1 is the first module of the stage control system for collecting the binocular picture or video information of the object in real time and transmitting it to the computing center device 2.
  • the computing center device 2 is a second module of the stage control system for completing the processing of the binocular picture or video, obtaining the real-time distance of the user from the camera, and issuing a control command to the controller 3.
  • the controller 3 is the third module of the stage control system, and is configured to receive control instructions from the computing center device 2 and, through those instructions, complete controls such as a volume controller (volume control) and a light regulator (light control).
  • the binocular camera of the embodiment of the present invention will be described below.
  • a binocular camera acquires images in real time through two cameras and transmits them to the computing center device. The binocular camera is connected to the computing center device wirelessly or by wire (e.g., USB, network cable), with no restriction imposed here.
  • the computing center device of the embodiment of the present invention is introduced below.
  • FIG. 4 is a flow chart of a method of processing a computing center device of a control system in accordance with an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:
  • step S401 a data relationship between the disparity and the depth information is established.
  • the calculation center device obtains the real-time picture (binocular picture or video) captured by the binocular camera.
  • the computing center device uses an open-source face recognition algorithm to obtain the portrait areas of the left camera and the right camera, and calculates the midpoint lateral coordinates of the left and right portraits according to a certain rule; the midpoint lateral coordinate of the left portrait in the left camera picture is x1, and the midpoint lateral coordinate of the right portrait in the right camera picture is x2.
  • the foregoing rules for calculating the midpoint lateral coordinates of the left and right portraits may be, but are not limited to: the average of the pixel coordinates of the two eyes in the face; the pixel coordinate of the nose in the face; the average of the pixel coordinates of the left hand and the right hand in the portrait; or the average of the pixel coordinates of the left arm and the right arm in the portrait.
  • the difference between the midpoint lateral coordinate x1 of the left portrait in the left camera picture and the midpoint lateral coordinate x2 of the right portrait in the right camera picture is calculated. The distance D2 from the camera to the object is then changed repeatedly (D2 being a known value), and the lateral coordinate difference of the center pixel points of the left and right portraits is recalculated each time. In this way, the difference between the midpoint lateral coordinate x1 of the left portrait and the midpoint lateral coordinate x2 of the right portrait can be obtained at different distances, giving the correspondence between the parallax of the same object seen by the left and right cameras and the distance.
  • FIG. 5 is a schematic diagram of a correspondence relationship between a parallax and a distance according to an embodiment of the present invention.
  • the distance D1 meters corresponds to the image parallax between the midpoint lateral coordinate X1 of the left camera picture and the midpoint lateral coordinate X2 of the right camera picture, where the left portrait center point is (X1, Y1) and the right portrait center point is (X2, Y2); the distance D2 meters corresponds to the parallax between the midpoint lateral coordinate X3 of the left camera picture and the midpoint lateral coordinate X4 of the right camera picture, where the left portrait center point is (X3, Y3) and the right portrait center point is (X4, Y4).
  • Step S402 calculating image disparity in real time to obtain real-time depth data.
  • the computing center device obtains the left and right camera images transmitted by the binocular camera in real time, and based on the above method, the difference between the midpoint lateral coordinate X1 of the real left portrait and the midpoint lateral coordinate X2 of the right portrait can be obtained.
  • A reverse search of the correspondence table is then performed to obtain the distance from the object to the camera at the current time point.
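A minimal sketch of this reverse search, assuming the (parallax, distance) table built during calibration (the names are illustrative, not prescribed by the embodiments):

```python
def distance_from_parallax(table, parallax):
    """Reverse search: return the distance whose recorded parallax is
    closest to the measured parallax. `table` holds (parallax, distance)
    pairs collected at known distances during calibration."""
    closest = min(table, key=lambda entry: abs(entry[0] - parallax))
    return closest[1]
```

Interpolating between the two nearest table entries, rather than snapping to the closest one, would give a smoother distance estimate between calibration points.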
  • The computing center device sends the real-time object/actor-to-camera distance to the controller, triggering the controller's control logic, which includes the following. First, the volume-related logic: when the object/actor-to-camera distance decreases,
  • the control system automatically adjusts the volume.
  • The numerical rules for the adjustment are not limited here (although a principle that can be followed is that the sound intensity is inversely proportional to the square of the distance). Second, the light-related logic: when the object/actor-to-camera distance decreases, the pan/tilt of the light automatically moves the focus of the light to an area close to the camera.
  • The adjusted numerical rules are likewise not limited here.
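One possible numerical rule following the inverse-square principle mentioned above (the embodiments leave the exact rule open): an inverse-square intensity falloff, expressed in decibels, becomes a 20·log10 correction relative to a reference distance. The function name and parameters are hypothetical.

```python
import math

def adjusted_volume_db(reference_db, reference_distance, current_distance):
    """Hypothetical volume rule: intensity ~ 1/d^2, so in decibels the
    level shifts by 20*log10(reference_distance / current_distance).
    Halving the distance raises the level by about 6 dB."""
    return reference_db + 20.0 * math.log10(reference_distance / current_distance)
```

For example, if the actor moves from 2 m to 1 m from the camera, a 60 dB reference level would be raised to roughly 66 dB.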
  • the application environment of the embodiment of the present invention may be, but is not limited to, the reference to the application environment in the foregoing embodiment, which is not described in this embodiment.
  • An embodiment of the present invention provides an optional specific application for implementing the processing method of the above control system.
  • FIG. 6 is a schematic diagram of a control effect of a virtual eyewear experience control system according to an embodiment of the invention.
  • FIG. 7 is a schematic diagram of another control effect of a virtual eyewear experience control system according to an embodiment of the present invention. As shown in the display screens of FIG. 6 and FIG. 7, the image of the anchor/actor becomes smaller; the change state of the distance from the binocular camera indicates that the distance between the anchor/actor and the binocular camera has become larger, and the sound volume is automatically reduced.
  • FIG. 8 is a schematic diagram of another control effect of a virtual eyewear experience control system according to an embodiment of the present invention.
  • As shown in FIG. 8, the image of the anchor/actor becomes larger, and the change state of the distance between the anchor/actor and the binocular camera indicates that the distance has become smaller; the light parameters of the control system are adjusted, and the adjusted light parameters cause the
  • light of the control system to be focused within the target area of the target imaging device, so that when the distance between the target object and the target imaging device decreases, the pan/tilt of the light can automatically move the focus of the light to an area close to the camera;
  • the display screen shown in FIG. 7 shows that the image of the anchor/actor is smaller and the change state of the distance between the anchor/actor and the binocular camera indicates that the distance has become larger; the light parameters of the control system are adjusted, and the adjusted light parameters cause the light of the control system to be focused outside the target area of the target imaging device.
  • In this way, the anchor/actor can single-handedly have the light dynamically follow his or her position, with the spotlight moving as the actor moves back and forth, so that the image seen by the user is more true to life, thereby ensuring that when the user watches the live broadcast through the virtual glasses, the experience is similar to being face to face with the actor.
  • The control system can automatically detect changes in the distance between the actor and the camera and adjust, in real time, the volume of the sound received by the user, simulating a face-to-face live broadcast or communication experience.
  • When the anchor is closer to the user, the sound is louder;
  • when the anchor is farther away from the user, the sound is softer, making the sound experienced by the user more realistic. The spotlight moves with the actor as he or she moves back and forth,
  • ensuring that the actor is well lit and achieving the purpose of light following, so that the light keeps shining on the actor.
  • According to an embodiment of the present invention, there is further provided a processing apparatus of a control system for implementing the processing method of the above control system, comprising one or more processors and one or more memories storing program units, wherein
  • the program units are executed by the processor and include a first acquiring unit, a second acquiring unit, and an adjusting unit.
  • FIG. 9 is a schematic diagram of a processing device of a control system in accordance with an embodiment of the present invention. As shown in FIG. 9, the apparatus may include: a first acquisition unit 10, a second acquisition unit 20, and an adjustment unit 30.
  • the first obtaining unit 10 is configured to acquire first image information of a target object currently moving in a real scene by a target imaging device in the control system.
  • the second obtaining unit 20 is configured to acquire a first distance corresponding to the first image information, wherein the first distance is a distance between the target imaging device and the target object.
  • the adjusting unit 30 is configured to adjust a target parameter of the control system according to the first distance, wherein the target parameter is used to control the control system to output media information to the virtual reality device, the virtual reality device is connected to the control system, and the media information and the target object are The moving information corresponding to the moving in the real scene corresponds to the first distance.
  • The first acquiring unit 10, the second acquiring unit 20, and the adjusting unit 30 may run in the terminal as part of the apparatus, and the functions implemented by the foregoing units may be performed by a processor in the terminal. The terminal may be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • the second obtaining unit 20 of the embodiment includes: a first acquiring module and a second acquiring module.
  • The first obtaining module is configured to acquire, in the first image information, a first image disparity between first sub-image information and second sub-image information, wherein the first sub-image information is obtained by the first camera on the target object, the second sub-image information is obtained by the second camera, the first camera and the second camera are deployed in the target imaging device, and the first image disparity is used to represent the difference between the first image of the target object indicated by the first sub-image information and the second image of the target object indicated by the second sub-image information; the second acquisition module is configured to acquire a first distance corresponding to the first image disparity in the target correspondence table.
  • The first acquiring module and the second acquiring module may run in the terminal as part of the apparatus, and the functions implemented by the foregoing modules may be performed by a processor in the terminal.
  • The first obtaining module of this embodiment includes: a first acquiring submodule, a second acquiring submodule, and a determining submodule.
  • the first obtaining sub-module is configured to acquire a first midpoint lateral coordinate in the first sub-image information, wherein the first midpoint lateral coordinate is a lateral coordinate of the center point of the first image in the target coordinate system; a second obtaining submodule configured to acquire a second midpoint lateral coordinate in the second sub image information, wherein the second midpoint lateral coordinate is a lateral coordinate of the center point of the second image in the target coordinate system; And a module configured to determine a difference between the first midpoint lateral coordinate and the second midpoint lateral coordinate as the first image disparity.
  • the foregoing first obtaining sub-module, the second obtaining sub-module and the determining sub-module may be run in the terminal as part of the device, and the functions implemented by the above-mentioned modules may be performed by a processor in the terminal.
  • The first obtaining unit 10 in this embodiment may be configured to perform step S202 in the embodiment of the present application;
  • the second obtaining unit 20 in this embodiment may be configured to perform the corresponding method step in the embodiment of the present application;
  • and the adjusting unit 30 in this embodiment may be configured to perform step S206 in the embodiment of the present application.
  • the above-mentioned units and modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in the above embodiments.
  • the foregoing module may be implemented in a hardware environment as shown in FIG. 1 as part of the device, and may be implemented by software or by hardware. Among them, the hardware environment includes the network environment.
  • In this embodiment, the first acquiring unit 10 acquires first image information of a target object currently moving in a real scene through a target imaging device in the control system; the second acquiring unit 20 acquires a first distance corresponding to the first image information, wherein the first distance is the distance between the target imaging device and the target object; and the adjusting unit 30 adjusts a target parameter of the control system according to the first distance, wherein the target parameter is used to control the control system to output media
  • information to the virtual reality device, the virtual reality device is connected with the control system, the media information corresponds to the movement information of the target object moving in the real scene, and the movement information includes the first distance. This achieves the technical effect of reducing the control cost of the control system, thereby solving
  • the technical problem in the related art that the control cost of the control system is large.
  • an electronic device for implementing the processing method of the above control system.
  • the electronic device of this embodiment is deployed in the control system as part of the control system of the embodiment of the present invention.
  • For example, the electronic device is deployed in the control system 104 shown in FIG. 1.
  • The control system is connected to a virtual reality device, including but not limited to a virtual reality helmet, virtual reality glasses, a virtual reality all-in-one machine, etc., which is used for receiving media information output by the control system, for example, receiving sound information, lighting information, and the like output by the control system.
  • the electronic device of this embodiment can be connected as a separate part to the control system of the embodiment of the present invention.
  • For example, the electronic device is connected to the control system 104 shown in FIG. 1, and the processing method of the control system of the embodiment of the present invention is executed by the control system.
  • the control system is connected to the virtual reality device through an electronic device, including but not limited to a virtual reality helmet, a virtual reality glasses, a virtual reality integrated machine, etc., for receiving media information output by the control system through the electronic device, such as Receiving sound information, lighting information, and the like output by the control system through the electronic device.
  • FIG. 10 is a block diagram showing the structure of an electronic device according to an embodiment of the invention.
  • The electronic device may include one or more (only one is shown in the figure) processors 101 and a memory 103, wherein a computer program may be stored in the memory 103, and the processor 101 may be configured to run
  • the computer program so as to perform the processing method of the control system of the embodiment of the present invention.
  • The memory 103 can be used to store a computer program and modules, such as the program instructions/modules corresponding to the processing method and apparatus of the control system in the embodiment of the present invention; the processor 101 is configured to run the software programs and modules stored in the memory 103, thereby performing various functional applications and data processing, that is, implementing the processing method of the above control system.
  • Memory 103 may include high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 103 can further include memory remotely located relative to processor 101, which can be connected to the electronic device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the electronic device may further include: a transmission device 105 and an input and output device 107.
  • The transmission device 105 is configured to receive or transmit data via a network, and can also be used for data transmission between the processor and the memory. Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 105 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
  • In one example, the transmission device 105 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 103 is used to store a computer program.
  • In this embodiment, the processor 101 can be configured to call, via the transmission device 105, the computer program stored in the memory 103 to perform the following steps: acquiring first image information of a target object currently moving in a real scene by a target imaging device in the control system; acquiring a first distance corresponding to the first image information, wherein the first distance is the distance between the target imaging device and the target object; and adjusting a target parameter of the control system according to the first distance, wherein
  • the target parameter is used to control the control system to output media information to the virtual reality device,
  • the virtual reality device is connected to the control system,
  • the media information corresponds to the movement information of the target object moving in the real scene, and
  • the movement information includes the first distance.
  • The processor 101 is further configured to: acquire, in the first image information, a first image disparity between the first sub-image information and the second sub-image information, wherein the first sub-image information is obtained by the first camera on the target object, the second sub-image information is obtained by the second camera, the first camera and the second camera are deployed in the target imaging device, and the first image disparity is used to represent the difference between the first image of the target object indicated by the first sub-image information and the second image of the target object indicated by the second sub-image information; and acquire the first distance corresponding to the first image disparity in the target correspondence table.
  • The processor 101 is further configured to: acquire a first midpoint lateral coordinate in the first sub-image information, wherein the first midpoint lateral coordinate is the lateral coordinate of the center point of the first image in the target coordinate system; acquire a second midpoint lateral coordinate in the second sub-image information, wherein the second midpoint lateral coordinate is the lateral coordinate of the center point of the second image in the target coordinate system; and determine the difference between the first midpoint lateral coordinate and the second midpoint lateral coordinate as the first image disparity.
  • The processor 101 is further configured to: search, in the target correspondence table, for the target image disparity having the smallest difference from the first image disparity; and determine the distance corresponding to the target image disparity in the target correspondence table as the first distance.
  • The processor 101 is further configured to: before acquiring the first distance corresponding to the first image disparity in the target correspondence table, acquire second image information of the target object by the target imaging device, where the distance between the target object and the target imaging device is a first target distance; acquire, in the second image information, a second image disparity between third sub-image information and fourth sub-image information, wherein the third sub-image information is obtained by the first camera on the target object, the fourth sub-image information is obtained by the second camera, and the second image disparity is used to represent the difference between the third image of the target object indicated by the third sub-image information
  • and the fourth image of the target object indicated by the fourth sub-image information; establish a correspondence between the first target distance and the second image disparity in the target correspondence table; acquire third image information of the target object by the target imaging device, wherein the distance between the target object and the target imaging device is a second target distance, and the second target distance is different from the first target distance; acquire, in the third image information, a third image disparity between fifth sub-image information and sixth sub-image information, wherein the fifth sub-image information is obtained by the first camera on the target object, the sixth sub-image information is obtained by the second camera, and the third image disparity is used to represent the difference between the fifth image of the target object indicated by the fifth sub-image information and the sixth image of the target object indicated by the sixth sub-image information; and establish a correspondence between the second target distance and the third image disparity in the target correspondence table.
  • The processor 101 is further configured to: acquire a third midpoint lateral coordinate in the third sub-image information, wherein the third midpoint lateral coordinate is the lateral coordinate of the center point of the third image in the target coordinate system; acquire a fourth midpoint lateral coordinate in the fourth sub-image information, wherein the fourth midpoint lateral coordinate is the lateral coordinate of the center point of the fourth image in the target coordinate system; and determine the difference between the third midpoint lateral coordinate and the fourth midpoint lateral coordinate as the second image disparity.
  • the processor 101 is further configured to: when the change state of the first distance indicates that the first distance becomes smaller, increase the sound parameter of the control system, wherein the target parameter includes a sound parameter, and the media information includes sound information,
  • the sound parameter is used to control the control system to output sound information to the virtual reality device; and when the change state of the first distance indicates that the first distance becomes large, the sound parameter of the control system is reduced.
  • the processor 101 is further configured to: adjust the light parameter of the control system to a first value when the change state of the first distance indicates that the first distance becomes smaller, so that the light of the control system is focused on the target camera Within the target area of the device, wherein the target parameter includes a light parameter, the media information includes light information, and the light parameter is used to control the control system to output the light information to the virtual reality device; the change state at the first distance indicates that the first distance becomes larger.
  • the lighting parameters of the control system are adjusted to a second value such that the lights of the control system are focused outside of the target area of the target imaging device.
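The sound and light rules above, both keyed to whether the first distance becomes smaller or larger, can be sketched as a single decision step. The function and the returned labels are illustrative only; the embodiments do not limit the concrete numerical values.

```python
def control_step(previous_distance, current_distance):
    """Hypothetical controller step: map the change state of the first
    distance to sound and light adjustments, as in the embodiments."""
    if current_distance < previous_distance:
        # Distance becomes smaller: increase the sound parameter and
        # focus the light within the target area.
        return {"volume": "increase", "light_focus": "inside_target_area"}
    if current_distance > previous_distance:
        # Distance becomes larger: reduce the sound parameter and
        # focus the light outside the target area.
        return {"volume": "decrease", "light_focus": "outside_target_area"}
    return {"volume": "unchanged", "light_focus": "unchanged"}
```

A real controller would translate these labels into concrete volume values (for example via the inverse-square rule) and pan/tilt commands for the light.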
  • the processor 101 is further configured to: acquire a first distance corresponding to the first image information by using a computing center device in the control system, wherein the computing center device is connected to the target imaging device; and the controller in the control system Receiving a first distance sent by the computing center device, and adjusting a target parameter of the control system according to the first distance.
  • The input/output device 107 of this embodiment is connected to the virtual reality device, and the processor 101 may output media information corresponding to the target parameter to the virtual reality device through the input/output device 107, for example, outputting sound information, lighting information, and the like.
  • the above input and output device 107 includes, but is not limited to, an audio device for outputting sound information, and further includes a lighting device for outputting light information, and including other devices for outputting media information.
  • The functions implemented by the input/output device 107 are only preferred implementations of the embodiments of the present invention; other input and output functions of the control system may also be included, and any input and output device of the control system
  • that can achieve the technical effect of reducing the control cost of the control system also falls within the scope of the present invention, and examples are not enumerated herein.
  • Optionally, the electronic device includes: a target imaging device, a computing center device, and a controller.
  • That is, the processor 101 in the electronic device includes: a target imaging device, a computing center device, and a controller.
  • the target image capturing device is configured to acquire first image information of a target object currently moving in a real scene; and the computing center device is configured to acquire a first distance corresponding to the first image information, where the computing center device and the target camera device
  • The controller is configured to receive the first distance sent by the computing center device and adjust the target parameter of the control system according to the first distance; the controller outputs the media information corresponding to the target parameter to the virtual reality device through the input/output device 107, for example, sound information, lighting information, and the like.
  • the control system includes: a target imaging device, a computing center device, and a controller.
  • In this embodiment, the electronic device acquires first image information of a target object currently moving in a real scene through the target imaging device of the control system; the electronic device acquires a first distance corresponding to the first image information through the computing center device of the control system, where the computing center device may be connected to the target imaging device; the electronic device receives, through the controller of the control system, the first distance sent by the computing center device, adjusts the target parameter of the control system according to the first distance, and then, through the input/output device 107, outputs
  • media information corresponding to the target parameter to the virtual reality device, for example, sound information, light information, and the like.
  • the first image information of the current target object is acquired by the target imaging device of the control system, wherein the target object moves in the real scene; and the first distance corresponding to the first image information is acquired, wherein the first distance is a distance between the target imaging device and the target object; adjusting a target parameter of the control system according to the first distance; and outputting media information corresponding to the target parameter.
  • With the above arrangement, the first distance between the target imaging device and the target object can be acquired at low cost from the first image information of the target object, and the target parameter of the control system is adjusted according to the first distance, so that the
  • control system outputs the media information corresponding to the movement information of the target object to the virtual reality device. This avoids manual adjustment of the target parameter of the control system and achieves
  • the purpose of controlling, through the target parameter of the control system, the output of the media information from the control system to the virtual reality device, thereby achieving the technical effect of reducing the control cost of the control system and solving the technical problem in the related art that the control cost of the control system is large.
  • The structure shown in FIG. 10 is merely illustrative, and the electronic device can be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another electronic device.
  • FIG. 10 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components (such as a network interface, display device, etc.) than shown in FIG. 10, or have a different configuration than that shown in FIG.
  • Embodiments of the present invention also provide a storage medium.
  • A computer program is stored in the storage medium, wherein the computer program is configured to perform, at runtime, the processing method of the control system.
  • the foregoing storage medium may be located on at least one of the plurality of network devices in the network shown in the foregoing embodiment.
  • In this embodiment, the storage medium is arranged to store program code for performing the following steps: acquiring first image information of a target object currently moving in a real scene by a target imaging device in the control system; acquiring a first distance corresponding to the first image information, wherein the first distance is the distance between the target imaging device and the target object; and adjusting a target parameter of the control system according to the first distance, wherein
  • the target parameter is used to control the control system to output media information to the virtual reality device,
  • the virtual reality device is connected to the control system,
  • the media information corresponds to the movement information of the target object moving in the real scene, and
  • the movement information includes the first distance.
  • The storage medium is further configured to store program code for performing the following steps: acquiring, in the first image information, a first image disparity between first sub-image information and second sub-image information, wherein the first sub-image information is obtained by the first camera on the target object, the second sub-image information is obtained by the second camera, the first camera and the second camera are deployed in the target imaging device, and the first image disparity is used to represent the difference between the first image of the target object indicated by the first sub-image information and the second image of the target object indicated by the second sub-image information; and acquiring the first distance corresponding to the first image disparity in the target correspondence table.
  • The storage medium is further configured to store program code for performing the following steps: acquiring a first midpoint lateral coordinate in the first sub-image information, wherein the first midpoint lateral coordinate is the lateral coordinate of the center point of the first image in the target coordinate system; acquiring a second midpoint lateral coordinate in the second sub-image information, wherein the second midpoint lateral coordinate is the lateral coordinate of the center point of the second image in the target coordinate system; and determining the difference between the first midpoint lateral coordinate and the second midpoint lateral coordinate as the first image disparity.
  • The storage medium is further configured to store program code for performing the following steps: finding, in the target correspondence table, the target image disparity having the smallest difference from the first image disparity; and determining the distance corresponding to the target image disparity in the target correspondence table as the first distance.
  • The storage medium is further configured to store program code for performing the following steps: before acquiring the first distance corresponding to the first image disparity in the target correspondence table, acquiring second image information of the target object by the target imaging device, wherein the distance between the target object and the target imaging device is a first target distance; acquiring, in the second image information, a second image disparity between third sub-image information and fourth sub-image information, wherein the third sub-image information is obtained by the first camera on the target object, the fourth sub-image information is obtained by the second camera, and the second image disparity is used to represent the difference between the third image of the target object indicated by the third sub-image information
  • and the fourth image of the target object indicated by the fourth sub-image information; establishing a correspondence between the first target distance and the second image disparity in the target correspondence table; acquiring third image information of the target object by the target imaging device, wherein the distance between the target object and the target imaging device is a second target distance, and the second target distance is different from the first target distance; acquiring, in the third image information, a third image disparity between fifth sub-image information and sixth sub-image information, wherein the fifth sub-image information is obtained by the first camera on the target object, the sixth sub-image information is obtained by the second camera, and the third image disparity is used to represent the difference between the fifth image of the target object indicated by the fifth sub-image information and the sixth image of the target object indicated by the sixth sub-image information;
  • and establishing a correspondence between the second target distance and the third image disparity in the target correspondence table.
  • The storage medium is further configured to store program code for performing the following steps: acquiring a third midpoint lateral coordinate in the third sub-image information, wherein the third midpoint lateral coordinate is the lateral coordinate of the center point of the third image in the target coordinate system; acquiring a fourth midpoint lateral coordinate in the fourth sub-image information, wherein the fourth midpoint lateral coordinate is the lateral coordinate of the center point of the fourth image in the target coordinate system; and determining the difference between the third midpoint lateral coordinate and the fourth midpoint lateral coordinate as the second image disparity. Acquiring the third image disparity between the fifth sub-image information and the sixth sub-image information includes: acquiring a fifth midpoint lateral coordinate in the fifth sub-image information, wherein the fifth midpoint lateral coordinate is the lateral coordinate of the center point of the fifth image in the target coordinate system; acquiring a sixth midpoint lateral coordinate in the sixth sub-image information, wherein the sixth midpoint lateral coordinate is the lateral coordinate of the center point of the sixth image in the target coordinate system; and determining the difference between the fifth midpoint lateral coordinate and the sixth midpoint lateral coordinate as the third image disparity.
  • the storage medium is further configured to store program code for performing the step of increasing a sound parameter of the control system if the change state of the first distance indicates that the first distance becomes smaller, wherein the target parameter comprises The sound parameter, the media information includes sound information, and the sound parameter is used to control the control system to output the sound information to the virtual reality device; and when the change state of the first distance indicates that the first distance becomes larger, the sound parameter of the control system is reduced.
  • the storage medium is further configured to store program code for performing the following steps: adjusting the light parameter of the control system to a first value when the change state of the first distance indicates that the first distance becomes smaller, The light of the control system is focused on the target area of the target imaging device, wherein the target parameter includes a light parameter, the media information includes light information, and the light parameter is used to control the control system to output the light information to the virtual reality device; at the first distance The change state indicates that the first distance becomes larger, and the light parameter of the control system is adjusted to a second value such that the light of the control system is focused outside the target area of the target imaging device.
  • the storage medium is further configured to store program code for performing the following steps: acquiring, by the computing center device in the control system, the first distance corresponding to the first image information, wherein the computing center device is connected to the target camera device; and adjusting the target parameter of the control system according to the first distance includes: receiving, by the controller in the control system, the first distance sent by the computing center device, and adjusting the target parameter of the control system according to the first distance.
  • the foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • if the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in the above computer-readable storage medium.
  • based on such an understanding, the technical solution of the present invention, in essence, the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • in the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • in the embodiments of the present invention, first image information of a target object currently moving in a real scene is acquired by the target camera device in the control system; a first distance corresponding to the first image information is acquired, wherein the first distance is the distance between the target camera device and the target object; and a target parameter of the control system is adjusted according to the first distance, wherein the target parameter is used to control the control system to output media information to a virtual reality device, the virtual reality device is connected to the control system, the media information corresponds to movement information of the target object moving in the real scene, and the movement information includes the first distance.
  • based on the correspondence between image information and distance, the first distance between the target camera device and the target object can be acquired at low cost from the first image information of the target object, and the target parameter of the control system is adjusted according to the first distance, so that the control system is controlled, by the target parameter, to output to the virtual reality device media information corresponding to the movement information of the target object; this avoids manually adjusting the target parameter of the control system, achieves the purpose of controlling the control system to output the media information to the virtual reality device through the target parameter, attains the technical effect of reducing the control cost of the control system, and thus solves the technical problem of the high control cost of control systems in the related art.
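The pipeline summarized above can be sketched end-to-end as follows. This is a minimal illustration, not the embodiment's implementation: the correspondence table, disparity values, and distances are made-up assumptions.

```python
# Illustrative end-to-end sketch: measured disparity -> distance via a pre-built
# correspondence table -> control decisions. All numbers are made up.
DISPARITY_TO_DISTANCE = {25.0: 1.0, 12.0: 2.0, 6.0: 4.0}  # pixels -> metres

def nearest_distance(disparity):
    # Pick the table entry whose disparity differs least from the measurement.
    key = min(DISPARITY_TO_DISTANCE, key=lambda d: abs(d - disparity))
    return DISPARITY_TO_DISTANCE[key]

def control_outputs(prev_distance, disparity):
    distance = nearest_distance(disparity)
    if distance < prev_distance:      # object moved closer
        return {"distance": distance, "volume": "up", "light_focus": "near"}
    if distance > prev_distance:      # object moved away
        return {"distance": distance, "volume": "down", "light_focus": "far"}
    return {"distance": distance, "volume": "hold", "light_focus": "hold"}

state = control_outputs(prev_distance=2.0, disparity=24.0)
```

Here a disparity of 24 px maps to the nearest calibrated entry (25 px, 1 m), and since the object moved closer, the sketch raises the volume and pulls the light focus near the camera.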

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Optics & Photonics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

A processing method and apparatus for a control system, a storage medium, and an electronic apparatus. The method includes: acquiring, by a target camera device in a control system (104), first image information of a target object currently moving in a real scene; acquiring a first distance corresponding to the first image information, wherein the first distance is the distance between the target camera device and the target object; and adjusting a target parameter of the control system according to the first distance, wherein the target parameter is used to control the control system to output media information to a virtual reality device (106), the virtual reality device (106) is connected to the control system (104), the media information corresponds to movement information of the target object moving in the real scene, and the movement information includes the first distance. The method solves the technical problem of the high control cost of a control system.

Description

Processing method and apparatus for a control system, storage medium, and electronic apparatus
This application claims priority to Chinese Patent Application No. 201711070963.9, filed with the China Patent Office on November 3, 2017 and entitled "Processing method and apparatus for control system, storage medium and electronic apparatus", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present invention relate to the field of virtual reality, and in particular to a processing method and apparatus for a control system, a storage medium, and an electronic apparatus.
Background
At present, performers often move back and forth while performing, and staff usually need to adjust the parameters of the control system manually to guarantee the performance effect; for example, staff need to manually adjust the parameters of the follow-spot equipment of the control system to dynamically adjust the position of the light so that the performer's face is well lit, which entails a high labor cost.
In addition, the related art additionally configures a depth camera, on top of the two lenses of a binocular camera, to acquire depth information, so that when a performer live-streams online through the binocular camera, users can watch the performance with a Virtual Reality (VR) glasses box and experience a face-to-face feeling with the performer. The problem with this approach is that it introduces a new camera and thereby increases the hardware cost of the control system.
The related art also processes the pictures obtained by the binocular cameras with complex algorithms to compute depth information for every pixel in the image. Although this method can obtain accurate depth information, its computational cost is huge and real-time performance cannot be achieved.
No effective solution has yet been proposed for the above problem of the high control cost of control systems.
发明内容
本发明实施例提供了一种控制系统的处理方法、装置、存储介质和电子装置,以至少解决相关技术控制系统的控制成本大的技术问题。
根据本发明实施例的一个方面,提供了一种控制系统的处理方法。该方法包括:通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。
根据本发明实施例的另一方面,还提供了一种控制系统的处理装置,包括一个或多个处理器,以及一个或多个存储程序单元的存储器,其中,程序单元由处理器执行,该程序单元包括:第一获取单元,被设置为通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;第二获取单元,被设置为获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;调整单元,用于按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。
根据本发明实施例的另一方面,还提供了一种存储介质。该存储介质中存储有计算机程序,计算机程序被设置为运行时执行本发明实施例的方法。
根据本发明实施例的另一方面,还提供了一种电子装置。该电子装置该包括存储器和处理器,其中,存储器中存储有计算机程序,处理器被设置为运行计算机程序以执行本发明实施例的方法。
在本发明实施例中,通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。由于根据图像信息与距离之间的对应关系,可以低成本通过目标对象的第一图像信息获取到目标摄像设备与目标对象之间的第一距离,并进一步按照该第一距离调整了控制系统的目标参数,以通过该目标参数控制控制系统向虚拟现实设备输出与目标对象的移动信息相对应的媒体信息,从而避免了手动对控制系统的目标参数进行调整,实现了通过控制系统的目标参数来控制控制系统向虚拟现实设备输出媒体信息的目的,达到了降低控制系统的控制成本的技术效果,进而解决了相关技术控制系统的控制成本大的技术问题。
附图说明
此处所说明的附图用来提供对本发明实施例的进一步理解,构成本申请的一部分,本发明的示意性实施例及其说明用于解释本发明,并不构成对本发明的不当限定。在附图中:
图1是根据本发明实施例的一种控制系统的处理方法的硬件环境的示意图;
图2是根据本发明实施例的一种控制系统的处理方法的流程图;
图3是根据本发明实施例的一种控制系统的示意图;
图4是根据本发明实施例的一种控制系统的计算中心设备的处理方法的流程图;
图5是根据本发明实施例的一种视差与距离之间的对应关系的示意图;
图6是根据本发明实施例的一种通过虚拟眼镜体验控制系统的控制效果示意图;
图7是根据本发明实施例的另一种通过虚拟眼镜体验控制系统的控制效果示意图;
图8是根据本发明实施例的另一种通过虚拟眼镜体验控制系统的控制效果示意图;
图9是根据本发明实施例的一种控制系统的处理装置的示意图;以及
图10是根据本发明实施例的一种电子装置的结构框图。
具体实施方式
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
需要说明的是,本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
根据本发明实施例的一个方面,提供了一种控制系统的处理方法的实施例。
可选地,在本实施例中,上述控制系统的处理方法可以应用于如图1 所示的由服务器102、包括目标摄像设备的控制系统104和虚拟现实设备106所构成的硬件环境中。图1是根据本发明实施例的一种控制系统的处理方法的硬件环境的示意图。如图1所示,服务器102通过网络与控制系统104进行连接,上述网络包括但不限于:广域网、城域网或局域网,控制系统104并不限定于舞台控制系统等。本发明实施例的控制系统的处理方法可以由服务器102来执行,也可以由控制系统104来执行,还可以是由服务器102和控制系统104共同执行。其中,控制系统104执行本发明实施例的控制系统的处理方法也可以是由安装在其上的客户端来执行。上述虚拟现实设备106并不限定于:虚拟现实头盔、虚拟现实眼镜、虚拟现实一体机等,用于用户体验由控制系统向虚拟现实设备输出的媒体信息,该媒体信息不限定于声音信息、灯光信息等。
图2是根据本发明实施例的一种控制系统的处理方法的流程图。如图2所示,该方法可以包括以下步骤:
步骤S202,通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息。
在本申请上述步骤S202提供的技术方案中,控制系统可以为演出舞台控制系统,用于控制舞台上演员的表演效果,或者控制直播室中演员的表演效果,其中,包括声音效果、灯光效果等控制效果。控制系统包括目标摄像设备,该目标摄像设备可以具有两个摄像头,也即,该目标摄像设备为双目摄像设备,比如,为包括左摄像头和右摄像头的双目摄像机、双目摄像头等。相比于单摄像设备而言,该实施例的双目摄像设备利用仿生学原理,通过标定后的双摄像头得到同步曝光图像,然后计算获取的二维图像像素点的三维深度信息。可选地,该实施例的现实场景包括演员在舞台上的表演场景。
该实施例通过上述目标摄像设备获取当前在现实场景中移动的目标对象的第一图像信息。该实施例的目标对象可以为现实场景中的演员、主播、物体等对象,其中,目标对象在现实场景中移动,也即,目标对象与 目标摄像设备之间的距离是变化的,通过目标摄像设备可以实时采集目标对象的第一图像信息,该第一图像信息可以为实时采集的目标对象的画面信息,包括通过双目摄像设备获取到的图片信息、视频信息等,还可以包括通过目标摄像设备中的左摄像头获取的图像的中点横向坐标和通过右摄像头获取的图像的中点横向坐标的差值,也即,该第一图像信息还包括图像视差,此处不做任何限制。
步骤S204,获取与第一图像信息对应的第一距离。
在本申请上述步骤S204提供的技术方案中,第一距离为目标摄像设备与目标对象之间的距离。
在通过控制系统的目标摄像设备获取当前目标对象的第一图像信息之后,获取与第一图像信息对应的第一距离,该第一距离为目标对象距离目标摄像设备的实时距离,可以为目标摄像设备对目标对象的深度信息。
可选地,该实施例从对应关系表中获取与第一图像信息对应的第一距离,该对应关系表包括预先建立的图像视差、目标对象与目标摄像头之间的距离的数据关系,可以根据第一图像信息中的图像视差从上述对应关系表中查找到当前时间对应的第一距离。
步骤S206,按照第一距离调整控制系统的目标参数。
在本申请上述步骤S206提供的技术方案中,在获取与第一图像信息对应的第一距离之后,按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。
该实施例的控制系统的目标参数可以为在舞台控制时需要达到某种表演效果的控制参数,用于控制控制系统向虚拟现实设备输出媒体信息。该目标参数可以包括控制系统的声音参数、灯光参数等,其中,声音参数用于控制控制系统向虚拟现实设备输出声音信息,灯光参数用于控制控制 系统向虚拟现实设备输出灯光信息。该实施例的虚拟现实设备与控制系统相连接,该虚拟现实设备可以映射控制系统获取的图像信息和媒体信息,可以为VR眼镜盒子,可以由观看表演的用户进行使用,来体验控制系统的控制效果。
可选地,在目标对象在现实场景中移动的过程中,按照目标对象与目标摄像设备之间的距离变化状态来调整控制系统的目标参数。可选地,调整控制系统的声音参数,比如,调整控制系统的声音强度(单位:dB),可选地,声音强度与距离平方成反比。当目标对象在现实场景移动过程中靠近目标摄像设备时,通过调整控制系统的声音参数使声音更大;当目标对象在现实场景移动过程中远离目标摄像设备时,通过调整控制系统的声音参数使声音更小,以使得用户在观看表演时听到的声音更加真实,从而为用户带来身临其境的体验。
需要说明的是,该实施例对控制系统调整声音参数的调整规则不做限制,任何可以实现当目标对象靠近目标摄像设备时声音更大,目标对象远离目标摄像设备时声音更小的调整规则都在本发明实施例的范围之内。
可选地,该实施例还可以调整控制系统的灯光参数来控制灯光,比如,当目标对象与目标摄像设备之间的距离减小时,控制灯光的云台自动将灯光聚焦处移动到距离目标摄像设备较近的区域;当目标对象与目标摄像设备之间的距离增大时,控制灯光的云台自动将灯光聚焦处移动到距离目标摄像设备较远的区域,这样可以实现控制聚光灯跟随目标对象前后移动,以保证灯光照在演员上的光线良好,从而为用户带来身临其境的体验。
需要说明的是,该实施例对调整灯光参数的调整规则不做具体限制,任何可以实现当目标对象靠近目标摄像设备时,灯光的云台自动将灯光聚焦处移动到距离目标摄像设备近的区域,当目标对象与目标摄像设备之间的距离增大时,控制灯光的云台自动将灯光聚焦处移动到距离目标摄像设备较远的区域的调整规则,都在本发明实施例的范围之内,此处不再一一举例说明。
该实施例的媒体信息与目标对象在现实场景中移动的移动信息相对应。该媒体信息包括声音信息、灯光信息等以及在舞台表演时用户观看时所需要的信息,其中,声音信息包括具有一定大小的声音信息,比如,当目标对象靠近目标摄像设备时,为声音变大的声音信息;当目标对象远离目标摄像设备时,为声音变小的声音信息。这样当用户通过虚拟现实设备进行体验时,比如,通过VR眼镜盒子进行观看时,当目标对象离用户更近的时候,用户所体验到的是听到的声音变大,当目标对象离用户更远的时候,用户所体验到的是听到的声音变小,从而使得用户在观看表演时听到的声音更加真实,为用户带来身临其境的体验。
可选地,当目标对象靠近目标摄像设备时,媒体信息为灯光的云台自动将灯光聚焦处移动到距离目标摄像设备近的区域时所呈现的光信息;当目标对象靠近目标摄像设备时,媒体信息为灯光的云台自动将灯光聚焦处移动到目标摄像设备远的区域时所呈现的光信息,从而保证了灯光照在演员上的光线良好,为用户带来身临其境的体验。
需要说明的是,该实施例除了上述媒体信息之外,还可以包括其它用于舞台控制的信息,此处不做任何限制。
通过上述步骤S202至步骤S206,通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;获取与第一图像信息对应的第一距离,该第一距离为目标摄像设备与目标对象之间的距离;按照第一距离调整控制系统的目标参数,该目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。由于根据图像信息与距离之间的对应关系,可以低成本通过目标对象的第一图像信息获取到目标摄像设备与目标对象之间的第一距离,并按照该第一距离调整了控制系统的目标参数,以通过该目标参数控制控制系统向虚拟现实设备输出与目标对象的移动信息相对应的媒体信息,从而避免了手动对控制系统的目标参数进行调整,实现了通过控制系统的目标参数来控 制控制系统向虚拟现实设备输出媒体信息的目的,达到了降低控制系统的控制成本的技术效果,进而解决了相关技术控制系统的控制成本大的技术问题。
作为一种可选的实施例,步骤S204,获取与第一图像信息对应的第一距离包括:在第一图像信息中获取第一子图像信息和第二子图像信息之间的第一图像视差,其中,第一子图像信息由第一摄像机对目标对象摄像得到,第二子图像信息由第二摄像机对目标对象摄像得到,第一摄像机和第二摄像机部署在目标摄像设备中,第一图像视差用于表征第一子图像信息所指示的目标对象的第一图像和第二子图像信息所指示的目标对象的第二图像之间的差异;在目标对应关系表中获取与第一图像视差对应的第一距离。
在该实施例中,通过目标摄像设备获取到的图像信息包括图像视差,也即,第一图像信息包括第一图像视差,该第一图像视差用于表征第一子图像信息所指示的目标对象的第一图像和第二子图像信息所指示的目标对象的第二图像之间的差异,由第一子图像信息和第二子图像信息得到,比如,第一图像视差通过第一子图像的中点横向坐标和第二子图像的中点横向坐标之差得到。其中,第一子图像信息由目标摄像设备中的第一摄像机对目标对象摄像得到,该第一摄像机可以为双目摄像机的左摄像头,或者右摄像头;第二子图像信息由目标摄像设备中的第二摄像机对目标对象摄像得到,在第一摄像机为双目摄像机的左摄像头的情况下,该第二摄像设备可以为双目摄像机的右摄像头,在第一摄像机为双目摄像机的右摄像头的情况下,该第二摄像设备可以为双目摄像机的左摄像头。
该实施例的目标对应关系表为预先建立的图像视差与距离之间的关系的数据关系表,其中包括了图像视差与距离之间的对应关系,该图像视差用于表征通过目标摄像设备获取到的不同图像之间的差异,包括该实施例的第一图像视差,距离为目标摄像设备与目标对象之间的距离,包括该实施例的第一距离。通过该目标对应关系表,可以在图像视差确定的情况 下,根据图像视差以及图像视差与距离之间的对应关系来确定距离,从而达到快速、低成本地获取与图像视差对应的距离的目的。比如,目标对应关系表中存储了第一图像视差与第一距离之间具有对应关系,在获取上述第一子图像信息和第二子图像信息之间的第一图像视差之后,从目标对应关系表中获取与该第一图像视差对应的第一距离,从而在目标对应关系表中可以快速、低成本地获取与第一图像视差对应的第一距离,进而按照第一距离调整控制系统的目标参数,输出与目标参数对应的媒体信息,达到了降低控制系统的控制成本的技术效果,提升了用户体验。
可选地,该目标对应关系表除了存储第一图像视差与第一距离之间的对应关系之外,还可以预先存储其它更多的图像视差与距离之间的对应关系。可以在目标摄像设备与目标对象之间的距离确定的情况下,获取与该距离对应的图像视差,从而将距离与对应的图像视差存储在目标对应关系表中,比如,设定目标摄像设备与目标对象之间的距离为D1米,通过目标摄像设备获取距离目标摄像设备D1米的目标对象的图像信息,从该图像信息中获取图像视差,再将上述D1米和与D1米对应的图像视差存储在目标对应关系表中;再设定目标摄像设备与目标对象之间的距离为D2米,该D2米与D1米不同,通过目标摄像设备获取距离目标摄像设备D2米的目标对象的图像信息,从该图像信息中获取图像视差,再将上述D2米和与D2米对应的图像视差存储在目标对应关系表中,以建立目标对应关系表,从而通过上述方法将更多的距离与对应的图像视差存储在目标对应关系表中。
作为一种可选的实施例,在第一图像信息中获取第一子图像信息和第二子图像信息之间的第一图像视差包括:获取第一子图像信息中的第一中点横向坐标,其中,第一中点横向坐标为第一图像的中心点在目标坐标系下的横向坐标;获取第二子图像信息中的第二中点横向坐标,其中,第二中点横向坐标为第二图像的中心点在目标坐标系下的横向坐标;将第一中点横向坐标与第二中点横向坐标之间的差值确定为第一图像视差。
在该实施例中,图像信息包括中点横向坐标,该中点横向坐标为图像的中心点在目标坐标系中的横向坐标值。获取第一子图像信息中的第一中点横向坐标,比如,第一子图像信息为左摄像头采集到的画面信息,该画面信息可以为左人像,该左人像在目标坐标系中的中心点为(X1,Y1),则左人像的第一中点横向坐标为X1;获取第二子图像信息中的第二中点横向坐标,第二子图像信息可以为右摄像头采集到的画面信息,该画面信息可以为右人像,该右人像在目标坐标系中的中心点为(X2,Y2),则右人像的第二中点横向坐标为X2。在获取第一子图像信息中的第一中点横向坐标,第二子图像信息中的第二中点横向坐标之后,获取第一中点横向坐标和第二中点横向坐标之间的差值,将第一中点横向坐标与第二中点横向坐标之间的差值确定为上述第一图像视差,也即,可以将(X1-X2)确定为第一图像视差,进而获取与上述第一图像视差对应的第一距离,按照第一距离调整控制系统的目标参数,输出与目标参数对应的媒体信息,达到了降低控制系统的控制成本的技术效果,提升了用户体验。
作为一种可选的实施例,在目标对应关系表中获取与第一图像视差对应的第一距离包括:在目标对应关系表中查找与第一图像视差之间的差值最小的目标图像视差;在目标对应关系表中将与目标图像视差对应的距离确定为第一距离。
在该实施例中,在获取与第一图像视差对应的第一距离时,由于目标对应关系表为预先建立的图像视差与距离之间的数据关系,但是在实时计算图像视差时,计算得到的图像视差可能不在预先建立的对应关系表中。在上述情况下,可以在目标对应关系表中查找与第一图像视差之间的差值为最小差值的目标图像视差,该差值为绝对差值,也即,将目标对应关系表中查找与第一图像视差最接近的图像视差,将与第一图像视差最接近的图像视差确定为目标图像视差,进而在目标对应关系表中,将与目标图像视差对应的距离确定为第一距离,同样可以按照第一距离调整控制系统的目标参数,输出与目标参数对应的媒体信息,达到了降低控制系统的控制成本的技术效果,提升了用户体验。
作为一种可选的实施例,在目标对应关系表中获取与第一图像视差对应的第一距离之前,该方法还包括:通过目标摄像设备获取目标对象的第二图像信息,其中,目标对象与目标摄像设备之间的距离为第一目标距离;在第二图像信息中获取第三子图像信息和第四子图像信息之间的第二图像视差,其中,第三子图像信息由第一摄像机对目标对象摄像得到,第四子图像信息由第二摄像机对目标对象摄像得到,第二图像视差用于表征第三子图像信息所指示的目标对象的第三图像和第四子图像信息所指示的目标对象的第四图像之间的差异;在目标对应关系表中建立第一目标距离与第二图像视差的对应关系;通过目标摄像设备获取目标对象的第三图像信息,其中,目标对象与目标摄像设备之间的距离为第二目标距离,第二目标距离不同于第一目标距离;在第三图像信息中获取第五子图像信息和第六子图像信息之间的第三图像视差,其中,第五子图像信息由第一摄像机对目标对象摄像得到,第六子图像信息由第二摄像机对目标对象摄像得到,第三图像视差用于表征第五子图像信息所指示的目标对象的第五图像和第六子图像信息所指示的目标对象的第六图像之间的差异;在目标对应关系表中建立第二目标距离与第三图像视差的对应关系。
在该实施例中,在从目标对应关系表中,获取与第一图像视差对应的第一距离之前,建立目标对应关系表中的数据关系。通过目标摄像设备获取距离目标摄像设备第一目标距离的目标对象的第二图像信息,该第一目标距离为预先设定的距离,比如,为D1米,该第一目标距离为设定的基本距离,可以为在使用户具有三维(3D)体验时,目标摄像设备与目标对象之间的最远距离。将目标对象摆放在距离目标摄像设备第一目标距离的区域,通过目标摄像设备获取实时对目标对象进行拍摄得到的第二图像信息。在获取到第二图像信息之后,从第二图像信息中,获取第三子图像信息和第四子图像信息之间的第二图像视差,该第二图像视差用于表征第三子图像信息所指示的目标对象的第三图像和第四子图像信息所指示的目标对象的第四图像之间的差异,第二图像视差由第三子图像信息和第四子图像信息得到,比如,第二图像视差通过第三子图像的中点横向坐标和第 四子图像的中点横向坐标之差得到。其中,第三子图像信息由目标摄像设备中的第一摄像机对用于建立目标对应关系表中的目标对象摄像得到,该第一摄像机可以为双目摄像机的左摄像头;第四子图像信息由目标摄像设备中的第二摄像机对目标对象摄像得到,在第一摄像机为双目摄像机的左摄像头的情况下,该第二摄像设备可以为双目摄像机的右摄像头。在目标对应关系表中,建立上述第一目标距离与第二图像视差的对应关系。
可选地,通过目标摄像设备获取距离目标摄像设备第二目标距离的目标对象的第三图像信息,该第二目标距离为预先设定的距离,比如,为D2米,可以为相对第一目标距离变化的距离,比如,第二目标距离为相对第一距离变化5米的距离。将目标对象摆放在距离目标摄像设备第二目标距离的区域,通过目标摄像设备获取实时对目标对象进行拍摄得到的第三图像信息。在获取到第三图像信息之后,从第三图像信息中,获取第五子图像信息和第六子图像信息之间的第三图像视差,该第三图像视差用于表征第五子图像信息所指示的目标对象的第五图像和第六子图像信息所指示的目标对象的第六图像之间的差异,该第三图像视差由第五子图像信息和第六子图像信息得到,比如,第三图像视差通过第五子图像的中点横向坐标和第六子图像的中点横向坐标之差得到。其中,第五子图像信息由目标摄像设备中的第一摄像机对用于建立目标对应关系表中的目标对象摄像得到,该第一摄像机可以为双目摄像机的左摄像头;第六子图像信息由目标摄像设备中的第二摄像机对目标对象摄像得到,在第一摄像机为双目摄像机的左摄像头的情况下,该第二摄像设备可以为双目摄像机的右摄像头。在目标对应关系表中,建立上述第二目标距离与第三图像视差的对应关系。
该实施例可以不断改变目标摄像设备距离目标对象之间的距离,重复上述步骤,还可以在目标对应关系表中,建立第三目标距离与第四图像视差的第三对应关系等,依次类推,从而建立包括图像视差与距离之间的数据关系的目标对应关系表。
需要说明的是,该实施例建立包括图像视差与距离之间的数据关系的目标对应关系表的方法,仅为本发明实施例的优选实施方式,并不代表本发明建立目标对应关系表的方法仅为上述方法,任何可以建立目标对应关系表的方法都在本发明实施例的范围之内,此处不再一一举例说明。
作为一种可选的实施例,获取第三子图像信息和第四子图像信息之间的第二图像视差包括:获取第三子图像信息中的第三中点横向坐标,其中,第三中点横向坐标为第三图像的中心点在目标坐标系下的横向坐标;获取第四子图像信息中的第四中点横向坐标,其中,第四中点横向坐标为第四图像的中心点在目标坐标系下的横向坐标;将第三中点横向坐标与第四中点横向坐标之间的差值确定为第二图像视差;获取第五子图像信息和第六子图像信息之间的第三图像视差包括:获取第五子图像信息中的第五中点横向坐标,其中,第五中点横向坐标为第五图像的中心点在目标坐标系下的横向坐标;获取第六子图像信息中的第六中点横向坐标,其中,第六中点横向坐标为第六图像的中心点在目标坐标系下的横向坐标;将第五中点横向坐标与第六中点横向坐标之间的差值确定为第三图像视差。
在该实施例中,在建立目标对应关系表时,获取第三子图像信息中的第三中点横向坐标,比如,第三子图像信息为左摄像头采集到的画面信息,该画面信息可以为左人像,该左人像在目标坐标系中的中心点为(X3,Y3),则左人像的第三中点横向坐标为X3;获取第四子图像信息中的第四中点横向坐标,第四子图像信息可以为右摄像头采集到的画面信息,该画面信息可以为右人像,该右人像在目标坐标系中的中心点为(X4,Y4),则右人像的第四中点横向坐标为X4。在获取第三子图像信息中的第三中点横向坐标,第四子图像信息中的第四中点横向坐标之后,获取第三中点横向坐标和第四中点横向坐标之间的差值,将第三中点横向坐标与第四中点横向坐标之间的差值确定为上述第二图像视差,也即,将(X3-X4)确定为第二图像视差,进而在目标对应关系表中,建立上述第一目标距离与第二图像视差的对应关系。
可选地,获取第五子图像信息中的第五中点横向坐标,比如,第五子图像信息为左摄像头采集到的画面信息,可以为左人像,该左人像在目标坐标系中的中心点为(X5,Y5),则左人像的第五中点横向坐标为X5;获取第六子图像信息中的第六中点横向坐标,第六子图像信息可以为右摄像头采集到的画面信息,该画面信息可以为右人像,该右人像在目标坐标系中的中心点为(X6,Y6),则右人像的第六中点横向坐标为X6。在获取第五子图像信息中的第五中点横向坐标,第六子图像信息中的第六中点横向坐标之后,获取第五中点横向坐标和第六中点横向坐标之间的差值,将第五中点横向坐标与第六中点横向坐标之间的差值确定为上述第三图像视差,也即,将(X5-X6)确定为第三图像视差,进而在目标对应关系表中,建立上述第二目标距离与第三图像视差的对应关系。
需要说明的是,在目标对应关系表中,建立其它目标距离与图像视差之间的对应关系也可以通过上述方法进行,此处不再一一举例说明。
作为一种可选的实施例,获取第三子图像信息中的第三中点横向坐标包括以下至少之一:在第一图像信息为人脸的图像信息的情况下,将左眼图像的中心点在目标坐标系下的横向坐标和右眼图像的中心点在目标坐标系下的横向坐标之间的平均值,确定为第一中点横向坐标;在第一图像信息为人脸的图像信息的情况下,将鼻子图像的中心点在目标坐标系下的横向坐标确定为第一中点横向坐标;在第一图像信息为人像的图像信息的情况下,将左手图像的中心点在目标坐标系下的横向坐标和右手图像的中心点在目标坐标系下的横向坐标之间的平均值,确定为第一中点横向坐标;在第一图像信息为人像的图像信息的情况下,将左臂图像的中心点在目标坐标系下的横向坐标和右臂图像的中心点在目标坐标系下的横向坐标之间的平均值,确定为第一中点横向坐标。
在该实施例中,可以使用开源的人脸识别算法,得到目标摄像设备中的第一摄像机和第二摄像机采集的图像区,比如,使用开源的人脸识别算法,得到左摄像头与右摄像头的人像区,并且按照一定规则分别计算出第 三子图像信息的第三中点横向坐标,第四子图像信息中的第四中点横向坐标,分别得到左摄像头的横向坐标,右摄像头的横向坐标。上述规则可以是,但不限于以下规则:
在第二图像信息为人脸的图像信息的情况下,将人脸中的双目的像素坐标的平均坐标确定为第三中点横向坐标,也即,将人脸中,两只眼睛像素坐标值的平均值确定为第三中点横向坐标;在第二图像信息为人脸的图像信息的情况下,将人脸中的鼻子的像素坐标确定为第三中点横向坐标;在第二图像信息为人像的图像信息的情况下,将人像中的左手和右手的像素坐标的平均坐标确定为第三中点横向坐标,也即,将人像中,左手与右手的像素坐标值的平均值确定为第三中点横向坐标;在第二图像信息为人像的图像信息的情况下,将人像中的左臂和右臂的像素坐标值的平均值确定为第三中点横向坐标,也即,在人像中,将左胳膊与右胳膊的像素坐标值的平均值确定为第三中点横向坐标。
需要说明的是,该实施例所列举的第一中点横向坐标、第二中点横向坐标、第四中点横向坐标、第五中点横向坐标的获取方法都可以通过上述确定第三中点横向坐标的规则获取,此处不再一一举例说明。
作为一种可选的实施例,按照第一距离调整控制系统的目标参数包括:在第一距离的变化状态指示第一距离变小的情况下,增大控制系统的声音参数,其中,目标参数包括声音参数,媒体信息包括声音信息,声音参数用于控制控制系统向虚拟现实设备输出声音信息;在第一距离的变化状态指示第一距离变大的情况下,减小控制系统的声音参数。
该实施例的目标参数可以包括声音参数,在按照第一距离调整控制系统的目标参数时,在第一距离的变化状态指示第一距离变小的情况下,增大控制系统的声音参数,这样当用户通过虚拟眼镜进行体验时,如果目标对象离用户更近,用户听到的声音会更大,可选地,声音强度(单位:dB)与第一距离的平方成反比;在第一距离的变化状态指示第一距离变大的情况下,减小控制系统的声音参数,这样当目标对象离用户更远的时候,用 户听到的声音会更小。这样在演员使用双目摄像机进行拍摄时,控制系统也可以自动检测出目标对象到目标摄像设备之间的距离变化,并实时地调整控制系统向虚拟现实设备输出声音信息,也即,调整用户接收到的声音大小,从而使得用户在观看舞台表演时,体验到的场景更加真实,以模拟出用户与演员面对面直播或沟通的体验。
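One possible volume rule consistent with the inverse-square principle mentioned above can be sketched as follows; the baseline level and reference distance are illustrative assumptions, since the embodiment deliberately leaves the exact adjustment rule open.

```python
import math

def adjusted_volume_db(base_db, ref_distance_m, current_distance_m):
    """Sound intensity assumed proportional to 1/d^2, expressed as a dB offset:
    10*log10((d_ref/d)^2) = 20*log10(d_ref/d). Closer -> louder."""
    return base_db + 20.0 * math.log10(ref_distance_m / current_distance_m)
```

Under this rule, halving the distance raises the output level by about 6 dB, and doubling it lowers the level by the same amount.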
作为一种可选的实施例,按照第一距离调整控制系统的目标参数包括:在第一距离的变化状态指示第一距离变小的情况下,将控制系统的灯光参数调整为第一值,以使得控制系统的灯光聚焦于目标摄像设备的目标区域之内,其中,目标参数包括灯光参数,媒体信息包括灯光信息,灯光参数用于控制控制系统向虚拟现实设备输出灯光信息;在第一距离的变化状态指示第一距离变大的情况下,将控制系统的灯光参数调整为第二值,以使得控制系统的灯光聚焦于目标摄像设备的目标区域之外。
该实施例的目标参数可以包括灯光参数,在按照第一距离调整控制系统的目标参数时,在第一距离的变化状态指示第一距离变小的情况下,调整控制系统的灯光参数为第一值,第一值的灯光参数使控制系统的灯光聚焦于目标摄像设备的目标区域之内,这样当目标对象到目标摄像设备之间的距离减小时,比如,当物体/演员到摄像头的距离减小时,灯光的云台可以自动将灯光聚焦处移动到距离摄像头近的区域;在第一距离的变化状态指示第一距离变大的情况下,将控制系统的灯光参数调整为第二值,第二值的灯光参数使控制系统的灯光聚焦于目标摄像设备的目标区域之外,实现了聚光灯跟随演员前后移动而移动,以保证灯光照在演员上的光线良好。
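A minimal sketch of the light-parameter logic described above: the two parameter values are placeholders (real pan-tilt control is hardware-specific), and the distance comparison mirrors the "first value inside / second value outside the target area" rule.

```python
FIRST_VALUE = 1.0   # placeholder: focus the light inside the camera's target area
SECOND_VALUE = 0.0  # placeholder: focus the light outside the target area

def light_parameter(prev_distance, curr_distance):
    # Distance shrinking -> focus near the camera; growing -> focus farther away.
    if curr_distance < prev_distance:
        return FIRST_VALUE
    if curr_distance > prev_distance:
        return SECOND_VALUE
    return None  # unchanged distance: leave the current parameter in place
```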
作为一种可选的实施例,步骤S204,获取与第一图像信息对应的第一距离包括:通过控制系统中的计算中心设备获取与第一图像信息对应的第一距离,其中,计算中心设备与目标摄像设备相连接;按照第一距离调整控制系统的目标参数包括:通过控制系统中的控制器接收计算中心设备发送的第一距离,并按照第一距离调整控制系统的目标参数。
该实施例的控制系统包括目标摄像设备、计算中心设备和控制器。其 中,目标摄像设备可以为双目摄像机。该目标摄像设备用于实时采集第一图像信息,并将第一图像信息传输至计算中心设备,其中,第一图像信息可以为双目图片或视频,目标摄像设备与计算中心设备通过无线或者有线的方式相连,其中,无线方式可以为微波通信、红外线通信和激光通信等,有线方式可以为通用串行总线(Universal Serial Bus,简称为USB)、网线等,此处不做任何限制。计算中心设备获取与第一图像信息对应的第一距离,比如,计算中心设备用于完成对双目图片或视频的处理,得到目标对象距离目标摄像设备的第一距离,比如,建立图像视差与深度信息的数据关系,实时计算图像视差,获得实时深度数据,并向控制器发出控制指令。控制器接收计算中心设备发送的第一距离,触发控制器的控制逻辑,并按照第一距离调整控制系统的目标参数,通过控制器输出与目标参数对应的媒体信息。
该实施例利用目标摄像设备在不同距离拍摄同一目标对象的不同图像视差大小的原理,根据实测或光学模拟的方法,获得距离与对应的图像视差大小的对应关系,进而实时地、低成本地根据目标对象的图像信息中的图像视差计算出目标对象摄像头的距离,完成舞台灯光及音响的自动控制,达到了降低控制系统的控制成本的技术效果,提升了用户体验。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如 ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
The technical solutions of the present invention are described below with reference to preferred embodiments.
This embodiment uses the principle that a binocular camera captures different disparities of the same object/user at different distances; by actual measurement or optical simulation, the correspondence between different object depths and the corresponding disparities is obtained, so that the distance from the object/user to the cameras can be computed in real time at low cost, completing the automatic control of stage lighting and sound.
The constituent modules of the binocular-camera-based stage control system of this embodiment are introduced below.
FIG. 3 is a schematic diagram of a control system according to an embodiment of the present invention. As shown in FIG. 3, the control system includes: a binocular camera 1, a computing center device 2, and a controller 3.
The binocular camera 1, the first module of the stage control system, is configured to collect binocular picture or video information of an object in real time and transmit it to the computing center device 2.
The computing center device 2, the second module of the stage control system, is configured to process the binocular pictures or video, obtain the real-time distance from the user to the cameras, and issue control instructions to the controller 3.
The controller 3, the third module of the stage control system, is configured to receive the control instructions of the computing center device 2 and, through them, control the volume controller (volume control), the light adjuster (light control), and other components.
The binocular camera of the embodiment of the present invention is introduced below.
A binocular camera collects pictures in real time through two cameras and transmits them to the computing center device. The binocular camera is connected to the computing center device either wirelessly or by wire (e.g., USB or network cable); no limitation is imposed here.
The computing center device of the embodiment of the present invention is introduced below.
FIG. 4 is a flowchart of a processing method of a computing center device of a control system according to an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:
Step S401: establish the data relationship between disparity and depth information.
First, the object is placed in a region D1 metres (a known value) away from the cameras, and the computing center device acquires the real-time pictures (binocular images or video) captured by the binocular camera. Optionally, the computing center device uses an open-source face recognition algorithm to obtain the portrait regions of the left and right cameras, and computes the midpoint lateral coordinates of the left and right portraits according to certain rules; for example, the midpoint lateral coordinate of the left portrait is x1 for the left camera, and the midpoint lateral coordinate of the right portrait is x2 for the right camera.
Optionally, the rules for computing the midpoint lateral coordinates of the left and right portraits may be, but are not limited to, the following:
in a face, the average of the pixel coordinates of the two eyes;
in a face, the pixel coordinate of the nose;
in a portrait, the average of the pixel coordinates of the left hand and the right hand;
in a portrait, the average of the pixel coordinates of the left arm and the right arm.
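The midpoint rules listed above can be sketched as follows. The landmark names and coordinates are hypothetical, assuming an off-the-shelf detector that returns (x, y) pixel coordinates for face/body landmarks; the fallback order shown is one possible choice, not mandated by the embodiment.

```python
def midpoint_x(landmarks):
    """Midpoint lateral coordinate of a portrait, per the rules above.
    `landmarks` maps a landmark name to an (x, y) pixel coordinate."""
    if "left_eye" in landmarks and "right_eye" in landmarks:
        return (landmarks["left_eye"][0] + landmarks["right_eye"][0]) / 2.0
    if "nose" in landmarks:
        return landmarks["nose"][0]
    if "left_hand" in landmarks and "right_hand" in landmarks:
        return (landmarks["left_hand"][0] + landmarks["right_hand"][0]) / 2.0
    if "left_arm" in landmarks and "right_arm" in landmarks:
        return (landmarks["left_arm"][0] + landmarks["right_arm"][0]) / 2.0
    raise ValueError("no usable landmarks found")
```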
Then, the difference between the midpoint lateral coordinate x1 of the left portrait in the left camera picture and the midpoint lateral coordinate x2 of the right portrait in the right camera picture is computed.
Finally, the distance from the cameras to the object is changed to D2 (a known value), and the lateral coordinate difference of the center pixels of the left and right portraits is computed again.
By repeating the above steps, the difference between the midpoint lateral coordinate x1 of the left portrait and the midpoint lateral coordinate x2 of the right portrait is obtained at different distances, yielding the correspondence between the disparity of the same object in the left and right cameras and its distance.
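The calibration procedure above can be sketched as follows; all distances and pixel coordinates are made-up illustrative values, and the table keys are the disparities measured at each known distance.

```python
def build_disparity_table(samples):
    """samples: (distance_m, x_left, x_right) triples measured at known
    distances. Returns {disparity: distance}, with disparity = x_left - x_right."""
    return {x_left - x_right: distance for distance, x_left, x_right in samples}

table = build_disparity_table([
    (1.0, 320.0, 295.0),  # object placed at D1 = 1 m (illustrative numbers)
    (2.0, 312.0, 300.0),  # object placed at D2 = 2 m (illustrative numbers)
])
```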
FIG. 5 is a schematic diagram of the correspondence between disparity and distance according to an embodiment of the present invention. As shown in FIG. 5, a distance of D1 metres corresponds to the image disparity between the midpoint lateral coordinate X1 of the left camera picture and the midpoint lateral coordinate X2 of the right camera picture, where the center point of the left portrait is (X1, Y1) and the center point of the right portrait is (X2, Y2). A distance of D2 metres corresponds to the disparity between the midpoint lateral coordinate X3 of the left camera picture and the midpoint lateral coordinate X4 of the right camera picture, where the center point of the left portrait is (X3, Y3) and the center point of the right portrait is (X4, Y4).
Step S402: compute the image disparity in real time to obtain real-time depth data.
First, the computing center device acquires in real time the left and right camera pictures transmitted by the binocular camera; based on the method above, the real-time difference between the midpoint lateral coordinate X1 of the left portrait and the midpoint lateral coordinate X2 of the right portrait can be obtained.
Then, according to the correspondence between the disparity of the same object in the left and right cameras and its distance, a reverse lookup yields the distance from the object to the cameras at the current point in time.
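The reverse lookup can be sketched as below, assuming a table that maps disparity (in pixels) to distance (in metres). Because a live measurement may fall between calibrated entries, the entry with the smallest absolute difference from the measured disparity is chosen, matching the nearest-match rule described elsewhere in this document.

```python
def reverse_lookup(table, measured_disparity):
    """Return the distance whose stored disparity is closest to the measurement."""
    best = None
    for disparity, distance in table.items():
        diff = abs(disparity - measured_disparity)
        if best is None or diff < best[0]:
            best = (diff, distance)
    return best[1]
```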
The controller of the embodiment of the present invention is introduced below.
The computing center device sends the real-time distance from the object/performer to the cameras to the controller, triggering the controller's control logic, which includes the following: first, volume-related logic — when the distance from the object/performer to the cameras decreases, the control system automatically turns up the volume, and the numerical adjustment rule is not limited here (one principle that may be followed is that the sound intensity, in dB, is inversely proportional to the square of the distance); second, light-related logic — when the distance from the object/performer to the cameras decreases, the light pan-tilt head automatically moves the light focus to a region close to the cameras, and the numerical adjustment rule is not limited here.
本发明实施例的应用环境可以但不限于参照上述实施例中的应用环境,本实施例中对此不再赘述。本发明实施例提供了用于实施上述控制系统的处理方法的一种可选的具体应用。
图6是根据本发明实施例的一种通过虚拟眼镜体验控制系统的控制效果示意图。图7是根据本发明实施例的另一种通过虚拟眼镜体验控制系统的控制效果示意图。如图6至图7所示的显示画面可知,主播/演员的图像变小,离双目摄像头的距离的变化状态指示主播/演员与双目摄像头之间的距离变大,此时自动减小控制系统的声音参数,以使用户听到的声音变小;由图7至图6所示的显示画面可知,主播/演员的图像变大,主播/演员离 双目摄像头的距离的变化状态指示主播/演员与双目摄像头之间的距离变小,此时自动增大控制系统的声音参数,以使用户听到的声音变大,这样当主播/演员在使用双目摄像头进行实时的线上直播时,即使在没有助手的帮助下,主播/演员也可以一个人实现音响跟随自身位置的动态控制,从而使得用户听到的声音更真,进而保证用户通过虚拟眼镜观看直播时,有用户与演员面对面的类似体验。
图8是根据本发明实施例的另一种通过虚拟眼镜体验控制系统的控制效果示意图。如图7和图8所示,主播/演员的图像变大,主播/演员离双目摄像头的距离的变化状态指示距离变小,调整控制系统的灯光参数,调整后的灯光参数使控制系统的灯光聚焦于目标摄像设备的目标区域之内,这样当目标对象到目标摄像设备之间的距离减小时,灯光的云台可以自动将灯光聚焦处移动到距离摄像头近的区域;由图8至图7所示的显示画面可知,主播/演员的图像变小,主播/演员离双目摄像头的距离的变化状态指示距离变大,调整控制系统的灯光参数,调整后的灯光参数使控制系统的灯光聚焦于目标摄像设备的目标区域之外。这样主播/演员可以一个人实现灯光跟随自身位置的动态控制,实现了聚光灯会跟随演员前后移动而移动,从而使得用户看到的图像更真,进而保证用户通过虚拟眼镜观看直播时,有用户与演员面对面的类似体验。
该实施例在演员使用双目摄像机进行拍摄时,控制系统可以自动检测出演员到摄像头之间的距离变化,并实时调整用户接收到的声音大小,模拟出面对面直播或沟通的体验。当主播离用户更近的时候,声音会更大,当主播离用户更远的时候,声音会更小,使得用户体验到的声音更真;聚光灯会跟随演员前后移动而移动,以保证照在演员的光线良好,实现了光线追随的目的,以保证灯光照在演员上的光线良好。
根据本发明实施例的另一方面,还提供了一种用于实施上述控制系统的处理方法的控制系统的处理装置,包括一个或多个处理器,以及一个或多个存储程序单元的存储器,其中,程序单元由处理器执行,程序单元包 括第一获取单元、第二获取单元和调整单元。图9是根据本发明实施例的一种控制系统的处理装置的示意图。如图9所示,该装置可以包括:第一获取单元10、第二获取单元20和调整单元30。
第一获取单元10,被设置为通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息。
第二获取单元20,被设置为获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离。
调整单元30,被设置为按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。
此处需要说明的是,上述第一获取单元10、第二获取单元20和调整单元30可以作为装置的一部分运行在终端中,可以通过终端中的处理器来执行上述单元实现的功能,终端也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌声电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。
可选地,该实施例的第二获取单元20包括:第一获取模块、第二获取模块。其中,第一获取模块,被设置为在第一图像信息中获取第一子图像信息和第二子图像信息之间的第一图像视差,其中,第一子图像信息由第一摄像机对目标对象摄像得到,第二子图像信息由第二摄像机对目标对象摄像得到,第一摄像机和第二摄像机部署在目标摄像设备中,第一图像视差被设置为表征第一子图像信息所指示的目标对象的第一图像和第二子图像信息所指示的目标对象的第二图像之间的差异;第二获取模块,被设置为在目标对应关系表中获取与第一图像视差对应的第一距离。
此处需要说明的是,上述第一获取模块、第二获取模块可以作为装置的一部分运行在终端中,可以通过终端中的处理器来执行上述模块实现的 功能。
可选地,该实施例的第一获取模块包括:第一获取子模块、第二获取子模块和确定子模块。其中,第一获取子模块,被设置为获取第一子图像信息中的第一中点横向坐标,其中,第一中点横向坐标为第一图像的中心点在目标坐标系下的横向坐标;第二获取子模块,被设置为获取第二子图像信息中的第二中点横向坐标,其中,第二中点横向坐标为第二图像的中心点在目标坐标系下的横向坐标;确定子模块,被设置为将第一中点横向坐标与第二中点横向坐标之间的差值确定为第一图像视差。
此处需要说明的是,上述第一获取子模块、第二获取子模块和确定子模块可以作为装置的一部分运行在终端中,可以通过终端中的处理器来执行上述模块实现的功能。
需要说明的是,该实施例中的第一获取单元10可以被设置为执行本申请实施例中的步骤S202,该实施例中的第二获取单元20可以被设置为执行本申请实施例中的步骤S204,该实施例中的调整单元30可以被设置为执行本申请实施例中的步骤S206。
此处需要说明的是,上述单元和模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。其中,硬件环境包括网络环境。
在该实施例中,第一获取单元10通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;通过第二获取单元20获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;通过调整单元30按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离,达到降低控制系统的控制成本的技术效果,进而解决了相关技术控制系统的控制成本大的技 术问题。
根据本发明实施例的另一方面,还提供了一种用于实施上述控制系统的处理方法的电子装置。
作为一种可选的实施方式,该实施例的电子装置作为本发明实施例的控制系统的一部分,部署在控制系统中,比如,该电子装置部署在图1所示的控制系统104中,用于执行本发明实施例的控制系统的处理方法。而该控制系统与虚拟现实设备相连接,该虚拟现实设备包括但不限于虚拟现实头盔、虚拟现实眼镜、虚拟现实一体机等,用于接收控制系统输出的媒体信息,比如,接收控制系统输出的声音信息、灯光信息等。
作为另一种可选的实施方式,该实施例的电子装置可以作为单独的一部分,与本发明实施例的控制系统相连接,比如,该电子装置与图1所示的控制系统104相连接,用于通过控制系统执行本发明实施例的控制系统的处理方法。而该控制系统通过电子装置与虚拟现实设备相连接,该虚拟现实设备包括但不限于虚拟现实头盔、虚拟现实眼镜、虚拟现实一体机等,用于通过电子装置接收控制系统输出的媒体信息,比如,通过电子装置接收控制系统输出的声音信息、灯光信息等。
图10是根据本发明实施例的一种电子装置的结构框图。如图10所示,该电子装置可以包括:一个或多个(图中仅示出一个)处理器101、存储器103,其中,存储器103中可以存储有计算机程序,处理器101可以被设置为运行所述计算机程序以执行本发明实施例的控制系统的处理方法。
可选地,存储器103可用于存储计算机程序以及模块,如本发明实施例中的控制系统的处理方法和装置对应的程序指令/模块,处理器101被设置为通过运行存储在存储器103内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的控制系统的处理方法。存储器103可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器103可进一步包括相对于处理器101远程设置的存储器,这些远程存储器 可以通过网络连接至电子装置。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
可选地,如图10所示,该电子装置还可以包括:传输装置105以及输入输出设备107。其中,存传输装置105用于经由一个网络接收或者发送数据,还可以用于处理器与存储器之间的数据传输。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置105包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,存传输装置105为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,具体地,存储器103用于存储计算机程序。
处理器101可以被设置为运行传输装置105调用存储器103存储的计算机程序,以执行下述步骤:
通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;
获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;
按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。
处理器101还用于执行下述步骤:在第一图像信息中获取第一子图像信息和第二子图像信息之间的第一图像视差,其中,第一子图像信息由第一摄像机对目标对象摄像得到,第二子图像信息由第二摄像机对目标对象摄像得到,第一摄像机和第二摄像机部署在目标摄像设备中,第一图像视差用于表征第一子图像信息所指示的目标对象的第一图像和第二子图像 信息所指示的目标对象的第二图像之间的差异;在目标对应关系表中获取与第一图像视差对应的第一距离。
处理器101还用于执行下述步骤:获取第一子图像信息中的第一中点横向坐标,其中,第一中点横向坐标为第一图像的中心点在目标坐标系下的横向坐标;获取第二子图像信息中的第二中点横向坐标,其中,第二中点横向坐标为第二图像的中心点在目标坐标系下的横向坐标;将第一中点横向坐标与第二中点横向坐标之间的差值确定为第一图像视差。
处理器101还用于执行下述步骤:在目标对应关系表中查找与第一图像视差之间的差值最小的目标图像视差;在目标对应关系表中将与目标图像视差对应的距离确定为第一距离。
处理器101还用于执行下述步骤:在目标对应关系表中获取与第一图像视差对应的第一距离之前,通过目标摄像设备获取目标对象的第二图像信息,其中,目标对象与目标摄像设备之间的距离为第一目标距离;在第二图像信息中获取第三子图像信息和第四子图像信息之间的第二图像视差,其中,第三子图像信息由第一摄像机对目标对象摄像得到,第四子图像信息由第二摄像机对目标对象摄像得到,第二图像视差用于表征第三子图像信息所指示的目标对象的第三图像和第四子图像信息所指示的目标对象的第四图像之间的差异;在目标对应关系表中建立第一目标距离与第二图像视差的对应关系;通过目标摄像设备获取目标对象的第三图像信息,其中,目标对象与目标摄像设备之间的距离为第二目标距离,第二目标距离不同于第一目标距离;在第三图像信息中获取第五子图像信息和第六子图像信息之间的第三图像视差,其中,第五子图像信息由第一摄像机对目标对象摄像得到,第六子图像信息由第二摄像机对目标对象摄像得到,第三图像视差用于表征第五子图像信息所指示的目标对象的第五图像和第六子图像信息所指示的目标对象的第六图像之间的差异;在目标对应关系表中建立第二目标距离与第三图像视差的对应关系。
处理器101还用于执行下述步骤:获取第三子图像信息中的第三中点 横向坐标,其中,第三中点横向坐标为第三图像的中心点在目标坐标系下的横向坐标;获取第四子图像信息中的第四中点横向坐标,其中,第四中点横向坐标为第四图像的中心点在目标坐标系下的横向坐标;将第三中点横向坐标与第四中点横向坐标之间的差值确定为第二图像视差;获取第五子图像信息和第六子图像信息之间的第三图像视差包括:获取第五子图像信息中的第五中点横向坐标,其中,第五中点横向坐标为第五图像的中心点在目标坐标系下的横向坐标;获取第六子图像信息中的第六中点横向坐标,其中,第六中点横向坐标为第六图像的中心点在目标坐标系下的横向坐标;将第五中点横向坐标与第六中点横向坐标之间的差值确定为第三图像视差。
处理器101还用于执行下述步骤:在第一距离的变化状态指示第一距离变小的情况下,增大控制系统的声音参数,其中,目标参数包括声音参数,媒体信息包括声音信息,声音参数用于控制控制系统向虚拟现实设备输出声音信息;在第一距离的变化状态指示第一距离变大的情况下,减小控制系统的声音参数。
处理器101还用于执行下述步骤:在第一距离的变化状态指示第一距离变小的情况下,将控制系统的灯光参数调整为第一值,以使得控制系统的灯光聚焦于目标摄像设备的目标区域之内,其中,目标参数包括灯光参数,媒体信息包括灯光信息,灯光参数用于控制控制系统向虚拟现实设备输出灯光信息;在第一距离的变化状态指示第一距离变大的情况下,将控制系统的灯光参数调整为第二值,以使得控制系统的灯光聚焦于目标摄像设备的目标区域之外。
处理器101还用于执行下述步骤:通过控制系统中的计算中心设备获取与第一图像信息对应的第一距离,其中,计算中心设备与目标摄像设备相连接;通过控制系统中的控制器接收计算中心设备发送的第一距离,并按照第一距离调整控制系统的目标参数。
可选地,该实施例的输入输出设备107与虚拟现实设备相连接,处理 器101可以通过上述输入输出设备107向虚拟现实设备输出与目标参数对应的媒体信息,比如,输出声音信息、灯光信息等。
可选地,上述输入输出设备107包括但不限于音响设备,用于输出声音信息,还包括灯光设备,用于输出灯光信息,以及包括其它用于输出媒体信息的设备。
需要说明的是,上述输入输出设备107实现的功能仅为本发明实施例的优选实现方式,还可以包括控制系统的其它输入输出功能,任何可以达到降低控制系统的控制成本的技术效果,解决了控制系统的控制成本大的技术问题的输入输出设备,也都在本发明的保护范围之内,此处不再一一举例说明。
可选地,在本发明实施例的电子装置部署在控制系统中的情况下,电子装置包括:目标摄像设备、计算中心设备和控制器,比如,电子装置中的处理器101包括:目标摄像设备、计算中心设备和控制器。其中,目标摄像设备,用于获取当前在现实场景中移动的目标对象的第一图像信息;计算中心设备,用于获取与第一图像信息对应的第一距离,其中,计算中心设备与目标摄像设备相连接;控制器,用于接收计算中心设备发送的第一距离,并按照第一距离调整控制系统的目标参数,控制器通过输入输出设备107向虚拟现实设备输出与目标参数对应的媒体信息,比如,输出声音信息、灯光信息等。
可选地,在该实施例的电子装置与本发明实施例的控制系统相连接的情况下,该控制系统包括:目标摄像设备、计算中心设备和控制器。其中,电子装置通过控制系统的目标摄像设备获取当前在现实场景中移动的目标对象的第一图像信息;电子装置通过控制系统的计算中心设备获取与第一图像信息对应的第一距离,其中,计算中心设备可以与目标摄像设备相连接;电子装置通过控制系统的控制器接收计算中心设备发送的第一距离,并按照第一距离调整控制系统的目标参数,进而通过输入输出设备107向虚拟现实设备输出与目标参数对应的媒体信息,比如,输出声音信息、灯 光信息等。
采用本发明实施例,通过控制系统的目标摄像设备获取当前目标对象的第一图像信息,其中,目标对象在现实场景中移动;获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;按照第一距离调整控制系统的目标参数;输出与目标参数对应的媒体信息。由于根据图像信息与距离之间的对应关系,可以低成本通过目标对象的第一图像信息获取到目标摄像设备与目标对象之间的第一距离,并按照该第一距离调整了控制系统的目标参数,以通过该目标参数控制控制系统向虚拟现实设备输出与目标对象的移动信息相对应的媒体信息,从而避免了手动对控制系统的目标参数进行调整,实现了通过控制系统的目标参数来控制控制系统向虚拟现实设备输出媒体信息的目的,达到了降低控制系统的控制成本的技术效果,进而解决了相关技术控制系统的控制成本大的技术问题。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
本领域普通技术人员可以理解,图10所示的结构仅为示意,电子装置可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等电子装置。图10其并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图10中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图10所示不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令电子装置相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
本发明的实施例还提供了一种存储介质。可选地,在本实施例中,上 述存储介质中存储有计算机程序,其中,所述计算机程序被设置为运行时可以用于执行控制系统的处理方法。
可选地,在本实施例中,上述存储介质可以位于上述实施例所示的网络中的多个网络设备中的至少一个网络设备上。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
通过控制系统中的目标摄像设备,获取当前在现实场景中移动的目标对象的第一图像信息;
获取与第一图像信息对应的第一距离,其中,第一距离为目标摄像设备与目标对象之间的距离;
按照第一距离调整控制系统的目标参数,其中,目标参数用于控制控制系统向虚拟现实设备输出媒体信息,虚拟现实设备与控制系统相连接,媒体信息与目标对象在现实场景中移动的移动信息相对应,移动信息包括第一距离。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在第一图像信息中获取第一子图像信息和第二子图像信息之间的第一图像视差,其中,第一子图像信息由第一摄像机对目标对象摄像得到,第二子图像信息由第二摄像机对目标对象摄像得到,第一摄像机和第二摄像机部署在目标摄像设备中,第一图像视差用于表征第一子图像信息所指示的目标对象的第一图像和第二子图像信息所指示的目标对象的第二图像之间的差异;在目标对应关系表中获取与第一图像视差对应的第一距离。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:获取第一子图像信息中的第一中点横向坐标,其中,第一中点横向坐标为第一图像的中心点在目标坐标系下的横向坐标;获取第二子图像信息中的第二中点横向坐标,其中,第二中点横向坐标为第二图像的中心点在目标坐标系下的横向坐标;将第一中点横向坐标与第二中点横向坐标之间的差值 确定为第一图像视差。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在目标对应关系表中查找与第一图像视差之间的差值最小的目标图像视差;在目标对应关系表中将与目标图像视差对应的距离确定为第一距离。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在目标对应关系表中获取与第一图像视差对应的第一距离之前,通过目标摄像设备获取目标对象的第二图像信息,其中,目标对象与目标摄像设备之间的距离为第一目标距离;在第二图像信息中获取第三子图像信息和第四子图像信息之间的第二图像视差,其中,第三子图像信息由第一摄像机对目标对象摄像得到,第四子图像信息由第二摄像机对目标对象摄像得到,第二图像视差用于表征第三子图像信息所指示的目标对象的第三图像和第四子图像信息所指示的目标对象的第四图像之间的差异;在目标对应关系表中建立第一目标距离与第二图像视差的对应关系;通过目标摄像设备获取目标对象的第三图像信息,其中,目标对象与目标摄像设备之间的距离为第二目标距离,第二目标距离不同于第一目标距离;在第三图像信息中获取第五子图像信息和第六子图像信息之间的第三图像视差,其中,第五子图像信息由第一摄像机对目标对象摄像得到,第六子图像信息由第二摄像机对目标对象摄像得到,第三图像视差用于表征第五子图像信息所指示的目标对象的第五图像和第六子图像信息所指示的目标对象的第六图像之间的差异;在目标对应关系表中建立第二目标距离与第三图像视差的对应关系。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:获取第三子图像信息中的第三中点横向坐标,其中,第三中点横向坐标为第三图像的中心点在目标坐标系下的横向坐标;获取第四子图像信息中的第四中点横向坐标,其中,第四中点横向坐标为第四图像的中心点在目标坐标系下的横向坐标;将第三中点横向坐标与第四中点横向坐标之间的差值确定为第二图像视差;获取第五子图像信息和第六子图像信息之间的第三 图像视差包括:获取第五子图像信息中的第五中点横向坐标,其中,第五中点横向坐标为第五图像的中心点在目标坐标系下的横向坐标;获取第六子图像信息中的第六中点横向坐标,其中,第六中点横向坐标为第六图像的中心点在目标坐标系下的横向坐标;将第五中点横向坐标与第六中点横向坐标之间的差值确定为第三图像视差。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在第一距离的变化状态指示第一距离变小的情况下,增大控制系统的声音参数,其中,目标参数包括声音参数,媒体信息包括声音信息,声音参数用于控制控制系统向虚拟现实设备输出声音信息;在第一距离的变化状态指示第一距离变大的情况下,减小控制系统的声音参数。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在第一距离的变化状态指示第一距离变小的情况下,将控制系统的灯光参数调整为第一值,以使得控制系统的灯光聚焦于目标摄像设备的目标区域之内,其中,目标参数包括灯光参数,媒体信息包括灯光信息,灯光参数用于控制控制系统向虚拟现实设备输出灯光信息;在第一距离的变化状态指示第一距离变大的情况下,将控制系统的灯光参数调整为第二值,以使得控制系统的灯光聚焦于目标摄像设备的目标区域之外。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:通过控制系统中的计算中心设备获取与第一图像信息对应的第一距离,其中,计算中心设备与目标摄像设备相连接;按照第一距离调整控制系统的目标参数包括:通过控制系统中的控制器接收计算中心设备发送的第一距离,并按照第一距离调整控制系统的目标参数。
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。
可选地,在本实施例中,上述存储介质可以包括但不限于:U盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介 质。
如上参照附图以示例的方式描述了根据本发明实施例的控制系统的处理方法、装置、存储介质和电子装置。但是,本领域技术人员应当理解,对于上述本发明实施例所提出的控制系统的处理方法、装置、存储介质和电子装置,还可以在不脱离本发明内容的基础上做出各种改进。因此,本发明实施例的保护范围应当由所附的权利要求书的内容确定。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本发明的技术方案本质上或者说对相关技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。
在本发明的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的 部分或者全部单元来实现本实施例方案的目的。
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
The foregoing descriptions are merely preferred implementations of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.
Industrial Applicability
In the embodiments of the present invention, first image information of a target object currently moving in a real scene is obtained through a target camera device in a control system; a first distance corresponding to the first image information is obtained, where the first distance is the distance between the target camera device and the target object; and a target parameter of the control system is adjusted according to the first distance, where the target parameter is used for controlling the control system to output media information to a virtual reality device, the virtual reality device is connected to the control system, the media information corresponds to movement information of the target object moving in the real scene, and the movement information includes the first distance. Based on the correspondence between image information and distance, the first distance between the target camera device and the target object can be obtained at low cost from the first image information of the target object, and the target parameter of the control system is adjusted according to the first distance, so that the target parameter controls the control system to output, to the virtual reality device, media information corresponding to the movement information of the target object. This avoids manually adjusting the target parameter of the control system, achieves the purpose of controlling, through the target parameter, the output of media information from the control system to the virtual reality device, attains the technical effect of reducing the control cost of the control system, and thereby solves the technical problem in the related art of the high control cost of control systems.
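One pass of the pipeline summarized above (image information → disparity → table lookup → parameter adjustment) might look like the sketch below. All helper names, coordinate values, table contents, and the 0.1 volume step are illustrative assumptions, not values from the original disclosure.

```python
def control_loop(midpoint_frames, table, volume=0.5):
    """midpoint_frames: iterable of (first_center_x, second_center_x) pairs,
    one per captured frame; table: disparity -> distance correspondence."""
    previous_distance = None
    for first_x, second_x in midpoint_frames:
        disparity = first_x - second_x                        # image disparity
        nearest = min(table, key=lambda d: abs(d - disparity))
        distance = table[nearest]                             # first distance
        if previous_distance is not None:                     # adjust parameter
            if distance < previous_distance:
                volume = min(volume + 0.1, 1.0)
            elif distance > previous_distance:
                volume = max(volume - 0.1, 0.0)
        previous_distance = distance
    return volume

table = {100.0: 1.0, 50.0: 2.0}
print(control_loop([(320.0, 220.0), (320.0, 268.0)], table))  # 0.4
```

The second frame's disparity (52.0) maps to the larger stored distance, so the object is treated as receding and the sound parameter is lowered.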

Claims (15)

  1. A processing method for a control system, comprising:
    obtaining, through a target camera device in a control system, first image information of a target object currently moving in a real scene;
    obtaining a first distance corresponding to the first image information, wherein the first distance is the distance between the target camera device and the target object; and
    adjusting a target parameter of the control system according to the first distance, wherein the target parameter is used for controlling the control system to output media information to a virtual reality device, the virtual reality device is connected to the control system, the media information corresponds to movement information of the target object moving in the real scene, and the movement information comprises the first distance.
  2. The method according to claim 1, wherein obtaining the first distance corresponding to the first image information comprises:
    obtaining, from the first image information, a first image disparity between first sub-image information and second sub-image information, wherein the first sub-image information is obtained by a first camera photographing the target object, the second sub-image information is obtained by a second camera photographing the target object, the first camera and the second camera are deployed in the target camera device, and the first image disparity is used for representing a difference between a first image of the target object indicated by the first sub-image information and a second image of the target object indicated by the second sub-image information; and
    obtaining, from a target correspondence table, the first distance corresponding to the first image disparity.
  3. The method according to claim 2, wherein obtaining, from the first image information, the first image disparity between the first sub-image information and the second sub-image information comprises:
    obtaining a first midpoint horizontal coordinate in the first sub-image information, wherein the first midpoint horizontal coordinate is the horizontal coordinate of the center point of the first image in a target coordinate system;
    obtaining a second midpoint horizontal coordinate in the second sub-image information, wherein the second midpoint horizontal coordinate is the horizontal coordinate of the center point of the second image in the target coordinate system; and
    determining the difference between the first midpoint horizontal coordinate and the second midpoint horizontal coordinate as the first image disparity.
  4. The method according to claim 2, wherein obtaining, from the target correspondence table, the first distance corresponding to the first image disparity comprises:
    searching the target correspondence table for a target image disparity having the smallest difference from the first image disparity; and
    determining, in the target correspondence table, the distance corresponding to the target image disparity as the first distance.
  5. The method according to claim 2, wherein before obtaining, from the target correspondence table, the first distance corresponding to the first image disparity, the method further comprises:
    obtaining second image information of the target object through the target camera device, wherein the distance between the target object and the target camera device is a first target distance;
    obtaining, from the second image information, a second image disparity between third sub-image information and fourth sub-image information, wherein the third sub-image information is obtained by the first camera photographing the target object, the fourth sub-image information is obtained by the second camera photographing the target object, and the second image disparity is used for representing a difference between a third image of the target object indicated by the third sub-image information and a fourth image of the target object indicated by the fourth sub-image information;
    establishing, in the target correspondence table, a correspondence between the first target distance and the second image disparity;
    obtaining third image information of the target object through the target camera device, wherein the distance between the target object and the target camera device is a second target distance, and the second target distance is different from the first target distance;
    obtaining, from the third image information, a third image disparity between fifth sub-image information and sixth sub-image information, wherein the fifth sub-image information is obtained by the first camera photographing the target object, the sixth sub-image information is obtained by the second camera photographing the target object, and the third image disparity is used for representing a difference between a fifth image of the target object indicated by the fifth sub-image information and a sixth image of the target object indicated by the sixth sub-image information; and
    establishing, in the target correspondence table, a correspondence between the second target distance and the third image disparity.
  6. The method according to claim 5, wherein:
    obtaining the second image disparity between the third sub-image information and the fourth sub-image information comprises: obtaining a third midpoint horizontal coordinate in the third sub-image information, wherein the third midpoint horizontal coordinate is the horizontal coordinate of the center point of the third image in a target coordinate system; obtaining a fourth midpoint horizontal coordinate in the fourth sub-image information, wherein the fourth midpoint horizontal coordinate is the horizontal coordinate of the center point of the fourth image in the target coordinate system; and determining the difference between the third midpoint horizontal coordinate and the fourth midpoint horizontal coordinate as the second image disparity; and
    obtaining the third image disparity between the fifth sub-image information and the sixth sub-image information comprises: obtaining a fifth midpoint horizontal coordinate in the fifth sub-image information, wherein the fifth midpoint horizontal coordinate is the horizontal coordinate of the center point of the fifth image in the target coordinate system; obtaining a sixth midpoint horizontal coordinate in the sixth sub-image information, wherein the sixth midpoint horizontal coordinate is the horizontal coordinate of the center point of the sixth image in the target coordinate system; and determining the difference between the fifth midpoint horizontal coordinate and the sixth midpoint horizontal coordinate as the third image disparity.
  7. The method according to claim 3, wherein obtaining the first midpoint horizontal coordinate in the first sub-image information comprises at least one of the following:
    when the first image information is image information of a human face, determining the average of the horizontal coordinate of the center point of a left-eye image in the target coordinate system and the horizontal coordinate of the center point of a right-eye image in the target coordinate system as the first midpoint horizontal coordinate;
    when the first image information is the image information of the human face, determining the horizontal coordinate of the center point of a nose image in the target coordinate system as the first midpoint horizontal coordinate;
    when the first image information is image information of a human figure, determining the average of the horizontal coordinate of the center point of a left-hand image in the target coordinate system and the horizontal coordinate of the center point of a right-hand image in the target coordinate system as the first midpoint horizontal coordinate; and
    when the first image information is the image information of the human figure, determining the average of the horizontal coordinate of the center point of a left-arm image in the target coordinate system and the horizontal coordinate of the center point of a right-arm image in the target coordinate system as the first midpoint horizontal coordinate.
  8. The method according to any one of claims 1 to 7, wherein adjusting the target parameter of the control system according to the first distance comprises:
    when the change state of the first distance indicates that the first distance is decreasing, increasing a sound parameter of the control system, wherein the target parameter comprises the sound parameter, the media information comprises sound information, and the sound parameter is used for controlling the control system to output the sound information to the virtual reality device; and
    when the change state of the first distance indicates that the first distance is increasing, decreasing the sound parameter of the control system.
  9. The method according to any one of claims 1 to 7, wherein adjusting the target parameter of the control system according to the first distance comprises:
    when the change state of the first distance indicates that the first distance is decreasing, adjusting a light parameter of the control system to a first value, so that the light of the control system is focused within a target area of the target camera device, wherein the target parameter comprises the light parameter, the media information comprises light information, and the light parameter is used for controlling the control system to output the light information to the virtual reality device; and
    when the change state of the first distance indicates that the first distance is increasing, adjusting the light parameter of the control system to a second value, so that the light of the control system is focused outside the target area of the target camera device.
  10. The method according to any one of claims 1 to 7, wherein:
    obtaining the first distance corresponding to the first image information comprises: obtaining, through a computing center device in the control system, the first distance corresponding to the first image information, wherein the computing center device is connected to the target camera device; and
    adjusting the target parameter of the control system according to the first distance comprises: receiving, through a controller in the control system, the first distance sent by the computing center device, and adjusting the target parameter of the control system according to the first distance.
  11. A processing apparatus for a control system, comprising one or more processors and one or more memories storing program units, wherein the program units are executed by the processors, and the program units comprise:
    a first obtaining unit, configured to obtain, through a target camera device in a control system, first image information of a target object currently moving in a real scene;
    a second obtaining unit, configured to obtain a first distance corresponding to the first image information, wherein the first distance is the distance between the target camera device and the target object; and
    an adjustment unit, configured to adjust a target parameter of the control system according to the first distance, wherein the target parameter is used for controlling the control system to output media information to a virtual reality device, the virtual reality device is connected to the control system, the media information corresponds to movement information of the target object moving in the real scene, and the movement information comprises the first distance.
  12. The apparatus according to claim 11, wherein the second obtaining unit comprises:
    a first obtaining module, configured to obtain, from the first image information, a first image disparity between first sub-image information and second sub-image information, wherein the first sub-image information is obtained by a first camera photographing the target object, the second sub-image information is obtained by a second camera photographing the target object, the first camera and the second camera are deployed in the target camera device, and the first image disparity is used for representing a difference between a first image of the target object indicated by the first sub-image information and a second image of the target object indicated by the second sub-image information; and
    a second obtaining module, configured to obtain, from a target correspondence table, the first distance corresponding to the first image disparity.
  13. The apparatus according to claim 12, wherein the first obtaining module comprises:
    a first obtaining sub-module, configured to obtain a first midpoint horizontal coordinate in the first sub-image information, wherein the first midpoint horizontal coordinate is the horizontal coordinate of the center point of the first image in a target coordinate system;
    a second obtaining sub-module, configured to obtain a second midpoint horizontal coordinate in the second sub-image information, wherein the second midpoint horizontal coordinate is the horizontal coordinate of the center point of the second image in the target coordinate system; and
    a determining sub-module, configured to determine the difference between the first midpoint horizontal coordinate and the second midpoint horizontal coordinate as the first image disparity.
  14. A storage medium, wherein the storage medium stores a computer program, and the computer program is configured to perform, when run, the method according to any one of claims 1 to 10.
  15. An electronic apparatus, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 10.
PCT/CN2018/112047 2017-11-03 2018-10-26 Processing method and apparatus for control system, storage medium, and electronic apparatus WO2019085829A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/594,565 US11275239B2 (en) 2017-11-03 2019-10-07 Method and apparatus for operating control system, storage medium, and electronic apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711070963.9A 2017-11-03 Processing method and apparatus for control system, storage medium, and electronic apparatus
CN201711070963.9 2017-11-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/594,565 Continuation US11275239B2 (en) 2017-11-03 2019-10-07 Method and apparatus for operating control system, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2019085829A1 true WO2019085829A1 (zh) 2019-05-09

Family

ID=66332805

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112047 WO2019085829A1 (zh) 2017-11-03 2018-10-26 控制系统的处理方法、装置、存储介质和电子装置

Country Status (3)

Country Link
US (1) US11275239B2 (zh)
CN (1) CN109752951B (zh)
WO (1) WO2019085829A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258312A * 2020-01-20 2020-06-09 深圳市商汤科技有限公司 Movable model and control method, apparatus, system, device, and storage medium therefor

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110535735B * 2019-08-15 2021-11-26 青岛海尔科技有限公司 Management method and apparatus for multimedia streams of Internet-of-Things devices
US10860098B1 * 2019-12-30 2020-12-08 Hulu, LLC Gesture-based eye tracking
CN113467603B * 2020-03-31 2024-03-08 抖音视界有限公司 Audio processing method and apparatus, readable medium, and electronic device
CN113562401B * 2021-07-23 2023-07-18 杭州海康机器人股份有限公司 Method, apparatus, system, terminal, and storage medium for controlling conveyance of a target object
KR20230065049A * 2021-11-04 2023-05-11 삼성전자주식회사 Wearable electronic device and method for controlling an electronic device using vision information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103487938A * 2013-08-28 2014-01-01 成都理想境界科技有限公司 Head-mounted display device
US20150215192A1 * 2008-06-05 2015-07-30 Gary Stephen Shuster Forum search with time-dependent activity weighting
CN106598229A * 2016-11-11 2017-04-26 歌尔科技有限公司 Virtual reality scene generation method and device, and virtual reality system
CN106713890A * 2016-12-09 2017-05-24 宇龙计算机通信科技(深圳)有限公司 Image processing method and apparatus

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4934580B2 * 2007-12-17 2012-05-16 株式会社日立製作所 Video/audio recording apparatus and video/audio reproducing apparatus
US20130071013A1 * 2011-03-01 2013-03-21 Shinsuke Ogata Video processing device, video processing method, program
US9532027B2 * 2011-05-27 2016-12-27 Warner Bros. Entertainment Inc. Methods for controlling scene, camera and viewing parameters for altering perception of 3D imagery
US9268406B2 * 2011-09-30 2016-02-23 Microsoft Technology Licensing, Llc Virtual spectator experience with a personal audio/visual apparatus
JP2014006674A * 2012-06-22 2014-01-16 Canon Inc Image processing apparatus, control method therefor, and program
US20150049079A1 * 2013-03-13 2015-02-19 Intel Corporation Techniques for threedimensional image editing
CN106462178A * 2013-09-11 2017-02-22 谷歌技术控股有限责任公司 Electronic device and method for detecting presence and motion
CN103617608B * 2013-10-24 2016-07-06 四川长虹电器股份有限公司 Method for obtaining a depth map from binocular images
US20150234206A1 * 2014-02-18 2015-08-20 Aliphcom Configurable adaptive optical material and device
EP3207542A1 * 2014-10-15 2017-08-23 Seiko Epson Corporation Head-mounted display device, method of controlling head-mounted display device, and computer program
CN105630336A * 2014-11-28 2016-06-01 深圳市腾讯计算机系统有限公司 Volume control method and apparatus
CN104898276A * 2014-12-26 2015-09-09 成都理想境界科技有限公司 Head-mounted display device
CN106534707A * 2015-09-14 2017-03-22 中兴通讯股份有限公司 Photographing method and apparatus
JP6461850B2 * 2016-03-31 2019-01-30 株式会社バンダイナムコエンターテインメント Simulation system and program
GB2548860A * 2016-03-31 2017-10-04 Nokia Technologies Oy Multi-camera image coding
CN105847578A * 2016-04-28 2016-08-10 努比亚技术有限公司 Parameter adjustment method for displayed information, and head-mounted device
CN105959595A * 2016-05-27 2016-09-21 西安宏源视讯设备有限责任公司 Method for autonomous response of the virtual to the real in real-time virtual reality interaction
CN106095235B * 2016-06-07 2018-05-08 腾讯科技(深圳)有限公司 Virtual-reality-based control method and apparatus
CN106157930A * 2016-06-30 2016-11-23 腾讯科技(深圳)有限公司 Brightness adjustment method and apparatus based on a head-mounted visual device
CN106225764A * 2016-07-01 2016-12-14 北京小米移动软件有限公司 Distance measurement method based on binocular cameras in a terminal, and terminal
JP6207691B1 * 2016-08-12 2017-10-04 株式会社コロプラ Information processing method and program for causing a computer to execute the information processing method
CN106303565B * 2016-08-12 2019-06-18 广州华多网络科技有限公司 Image quality optimization method and apparatus for live video streaming
CN106572417B * 2016-10-27 2019-11-05 腾讯科技(深圳)有限公司 Sound effect control method and apparatus
CN106843532A * 2017-02-08 2017-06-13 北京小鸟看看科技有限公司 Method and apparatus for implementing a virtual reality scene
CN108693970B * 2017-04-11 2022-02-18 杜比实验室特许公司 Method and device for adapting video images of a wearable apparatus
CN107105183A * 2017-04-28 2017-08-29 宇龙计算机通信科技(深圳)有限公司 Recording volume adjustment method and apparatus


Also Published As

Publication number Publication date
US11275239B2 (en) 2022-03-15
US20200033599A1 (en) 2020-01-30
CN109752951B (zh) 2022-02-08
CN109752951A (zh) 2019-05-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18874167

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18874167

Country of ref document: EP

Kind code of ref document: A1