CN111625210A - Large screen control method, device and equipment

Large screen control method, device and equipment

Info

Publication number
CN111625210A
CN111625210A (application no. CN201910147561.7A)
Authority
CN
China
Prior art keywords
image
large screen
camera
picture
controlled
Prior art date
Legal status
Granted
Application number
CN201910147561.7A
Other languages
Chinese (zh)
Other versions
CN111625210B (en)
Inventor
胡景翔 (Hu Jingxiang)
Current Assignee
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd
Priority to CN201910147561.7A
Publication of CN111625210A
Application granted
Publication of CN111625210B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/1446 - Digital output to display device; cooperation and interconnection of the display device with other functional units; controlling a plurality of local displays composed of modules, e.g. video walls
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/40 - Scenes; scene-specific elements in video content
    • G06V 2201/02 - Indexing scheme relating to image or video recognition or understanding; recognising information on displays, dials, clocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide a large screen control method, apparatus, and device. The method comprises: receiving a first operation instruction for an image to be processed; acquiring a mapping relationship between pixel points in the image to be processed and pixel points in a large screen to be controlled; converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship; and controlling the large screen to be controlled based on the second operation instruction. With this scheme, the large screen is controlled through the user's operations on the image, which improves the convenience of control.

Description

Large screen control method, device and equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a large screen control method, device and equipment.
Background
A large screen is a spliced screen composed of multiple displays, also called a television wall. Pictures collected from multiple signal sources can be displayed on the large screen. In some scenarios, the content displayed on the large screen needs to be controlled, for example, to adjust the display position of each signal source's picture on the large screen, or to rearrange the display positions after signal sources are added or removed.
In existing schemes, the display position of each signal source's picture on the large screen is generally fixed, and the large screen can only be controlled by reconfiguration, which is inconvenient.
Disclosure of Invention
Embodiments of the invention aim to provide a large screen control method, apparatus, and device so as to improve the convenience of control.
In order to achieve the above object, an embodiment of the present invention provides a large screen control method, including:
receiving a first operation instruction for an image to be processed, wherein the image to be processed is obtained by image acquisition for a large screen to be controlled;
acquiring a mapping relation between pixel points in the image to be processed and pixel points in the large screen to be controlled;
converting the first operation instruction into a second operation instruction aiming at the large screen to be controlled according to the mapping relation;
and controlling the large screen to be controlled based on the second operation instruction.
Optionally, the obtaining of the mapping relationship between the pixel point in the image to be processed and the pixel point in the large screen to be controlled includes:
determining the distance between a camera for acquiring images of the large screen to be controlled and the large screen to be controlled as a first distance;
calculating a ratio of the first distance to the second distance, the second distance being: a distance of an imaging plane from a lens of the camera;
and determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled according to the ratio.
Optionally, the mapping relationship is:
A1 = (u1 - u2) * z/f; B1 = (v1 - v2) * z/f;
wherein (A1, B1) represents the coordinates of the pixel point in the large screen to be controlled, (u1, v1) represents the coordinates of the pixel point in the image to be processed, (u2, v2) represents the coordinates of the corner point of the large screen contained in the image to be processed, z represents the first distance, and f represents the second distance.
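Purely as an illustration (not part of the claimed method), the mapping above can be sketched in Python; the function name and argument layout below are assumptions, and z and f are assumed to be in consistent units:

```python
def image_to_screen(u1, v1, corner, z, f):
    """Map an image pixel (u1, v1) to large-screen coordinates (A1, B1).

    corner: (u2, v2), image coordinates of the large-screen corner point.
    z: the first distance (camera to large screen).
    f: the second distance (imaging plane to lens).
    """
    u2, v2 = corner
    ratio = z / f  # ratio of the first distance to the second distance
    return ((u1 - u2) * ratio, (v1 - v2) * ratio)
```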
Optionally, the method further includes:
acquiring an image of a large screen to be controlled by a first camera in a binocular camera to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the determining a distance between a camera for acquiring an image of the large screen to be controlled and the large screen to be controlled as a first distance includes:
and calculating the distance between the first camera and the large screen to be controlled as a first distance according to the camera distance of the binocular cameras, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular cameras.
Optionally, the calculating, according to the camera distance of the binocular camera, the distance between the imaging plane and the lens of the first camera, and the imaging point distance of the binocular camera, the distance between the first camera and the large screen to be controlled as a first distance includes:
calculating the first distance using the following equation:
z = f * b/d; wherein z represents the first distance, f represents a distance between an imaging plane and a lens of the first camera, b represents a camera pitch of the binocular camera, and d represents an imaging point pitch of the binocular camera.
Optionally, the method further includes:
acquiring an image of a large screen to be controlled by a first camera in a binocular camera to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the mapping relationship comprises: a first partial mapping relationship and a second partial mapping relationship; the obtaining of the mapping relationship between the pixel points in the image to be processed and the pixel points in the large screen to be controlled includes:
determining a mapping relation of pixel points in the image to be processed, which is mapped to a three-dimensional space coordinate system, as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera; the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, is positioned on the plane where the large screen to be controlled is positioned;
and determining the mapping relation between the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, and the pixel point in the large screen to be controlled as the second part mapping relation.
Optionally, the determining, according to the camera distance of the binocular camera, the distance between the imaging plane and the lens of the first camera, and the imaging point distance of the binocular camera, a mapping relationship in which a pixel point in the image to be processed is mapped to a three-dimensional space coordinate system as the first partial mapping relationship includes:
determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values;
and determining a mapping relation of the offset coordinate value to a three-dimensional space coordinate system according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera, and taking the mapping relation as the first part mapping relation.
Optionally, the determining a coordinate value of a pixel point in the image to be processed in a coordinate system using the first camera as an origin as a shift coordinate value includes:
calculating the offset coordinate value of the pixel point in the image to be processed by using the following formula:
P = u - L/2; Q = -v + W/2; wherein (u, v) represents the coordinate value of the pixel point in the image coordinate system in the image to be processed, (P, Q) represents the offset coordinate value of the pixel point in the image to be processed, L represents the length of the image to be processed, and W represents the width of the image to be processed;
the first partial mapping relationship is:
Z = f * b/d'; X = P * Z/f; Y = Q * Z/f; wherein (X, Y, Z) represents coordinate values in the three-dimensional space coordinate system, f represents a distance of an imaging plane from a lens of the first camera, b represents a camera pitch of the binocular camera, and d' represents an imaging point pitch of the binocular camera in a coordinate system with the first camera as an origin;
the method further comprises the following steps:
mapping pixel points in the image to be processed to the three-dimensional space coordinate system according to the first part mapping relation;
determining a first side vector and a second side vector of a large screen to be controlled in the three-dimensional space coordinate system, wherein the intersection point of the first side vector and the second side vector is the angular point of the large screen to be controlled;
the second partial mapping relationship is:
A = |v| * cos β; B = |v| * cos α;
wherein (A, B) represents the coordinate value of the pixel point in the large screen to be controlled in the coordinate system of the large screen to be controlled, v represents the vector formed by the mapped pixel point and the corner point in the three-dimensional space coordinate system, α represents the angle between v and the second edge vector, and β represents the angle between the target vector v and the first edge vector.
Optionally, the first operation instruction is an instruction for adjusting a picture position, where the first operation instruction includes a first pre-adjustment position and a first post-adjustment position, and both the first pre-adjustment position and the first post-adjustment position are positions in the image to be processed;
the converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship comprises:
converting the first position before adjustment into a second position before adjustment according to the mapping relation;
converting the first adjusted position into a second adjusted position according to the mapping relation to obtain a second operation instruction comprising the second pre-adjusted position and the second post-adjusted position, wherein the second pre-adjusted position and the second post-adjusted position are positions in the large screen to be controlled;
the controlling the large screen to be controlled based on the second operation instruction comprises the following steps:
determining a picture to be adjusted in the large screen to be controlled according to the second position before adjustment;
and adjusting the picture to be adjusted to the second adjusted position.
Optionally, the adjusting the to-be-adjusted picture to the second adjusted position includes:
exchanging the picture displayed at the second adjusted position with the picture to be adjusted;
or, the picture to be adjusted is adjusted to the second adjusted position by zooming the picture to be adjusted.
Optionally, the adjusting the to-be-adjusted picture to the second adjusted position by scaling the to-be-adjusted picture includes:
calculating a picture scaling factor according to the position relation between the second position before adjustment and the second position after adjustment;
and zooming the picture to be adjusted according to the picture zooming coefficient to obtain a zoomed picture, wherein the zoomed picture is positioned at the second adjusted position.
Optionally, the calculating a picture scaling factor according to the position relationship between the second pre-adjustment position and the second post-adjustment position includes:
calculating the size of the picture to be adjusted before adjustment according to the position relation between the second position before adjustment and the preset position in the picture to be adjusted;
calculating the adjusted size of the picture to be adjusted according to the position relation between the second adjusted position and the preset position in the picture to be adjusted;
and calculating a picture scaling coefficient of the adjusted size relative to the pre-adjusted size.
Optionally, the first operation instruction is a new picture display instruction, where the new picture display instruction includes a new picture identifier and a first display position of a new picture; the converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship comprises:
converting the first display position into a second display position according to the mapping relation to obtain a second operation instruction comprising the new picture identifier and the second display position;
the controlling the large screen to be controlled based on the second operation instruction comprises the following steps:
and displaying the picture corresponding to the new picture identification at the second display position of the large screen to be controlled.
Optionally, the method further includes:
identifying the edge of each picture in the image to be processed as a first edge;
converting each first edge into an edge of the picture in the large screen to be controlled as a second edge according to the mapping relation;
the controlling the large screen to be controlled based on the second operation instruction comprises the following steps:
determining a picture to be adjusted corresponding to the second operation instruction according to each second edge;
and adjusting the picture to be adjusted according to the second operation instruction.
Optionally, the identifying an edge of each picture in the image to be processed as a first edge includes:
correcting a non-rectangular picture in the image to be processed into a rectangular picture to obtain a corrected image;
and identifying the edge of each picture in the corrected image as a first edge.
Optionally, the identifying an edge of each picture in the corrected image as a first edge includes:
identifying horizontal and vertical lines in the corrected image;
counting the number of pixel points included by each identified horizontal line and vertical line;
and selecting lines of which the number of the pixel points meets the preset condition as a first edge.
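A rough sketch of this edge-identification step follows; OpenCV and NumPy are used here as assumed tooling (the claims do not prescribe any library), and the threshold ratio standing in for the "preset condition" is likewise an assumption:

```python
import cv2
import numpy as np

def find_picture_edges(corrected_gray, min_ratio=0.5):
    """Identify horizontal and vertical lines whose edge-pixel counts meet
    a preset condition, as candidate picture edges.

    corrected_gray: 8-bit grayscale image after perspective correction.
    min_ratio: assumed preset condition; a line qualifies when edge pixels
    cover at least this fraction of its length.
    """
    edges = cv2.Canny(corrected_gray, 50, 150)   # binary edge map
    h, w = edges.shape
    row_counts = (edges > 0).sum(axis=1)         # pixels on each horizontal line
    col_counts = (edges > 0).sum(axis=0)         # pixels on each vertical line
    horizontal = np.flatnonzero(row_counts >= min_ratio * w)
    vertical = np.flatnonzero(col_counts >= min_ratio * h)
    return horizontal, vertical
```

The returned row and column indices approximate the first edges, which can then be converted into second edges on the large screen through the mapping relationship.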
In order to achieve the above object, an embodiment of the present invention further provides a large screen control device, including:
the device comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving a first operation instruction aiming at an image to be processed, and the image to be processed is obtained by carrying out image acquisition aiming at a large screen to be controlled;
the acquisition module is used for acquiring the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled;
the first conversion module is used for converting the first operation instruction into a second operation instruction aiming at the large screen to be controlled according to the mapping relation;
and the control module is used for controlling the large screen to be controlled based on the second operation instruction.
Optionally, the obtaining module includes:
the first determining submodule is used for determining the distance between a camera for acquiring images of the large screen to be controlled and the large screen to be controlled as a first distance;
a calculation submodule, configured to calculate a ratio of the first distance to the second distance, where the second distance is: a distance of an imaging plane from a lens of the camera;
and the second determining submodule is used for determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled according to the ratio.
Optionally, the mapping relationship is:
A1 = (u1 - u2) * z/f; B1 = (v1 - v2) * z/f;
wherein (A1, B1) represents the coordinates of the pixel point in the large screen to be controlled, (u1, v1) represents the coordinates of the pixel point in the image to be processed, (u2, v2) represents the coordinates of the corner point of the large screen contained in the image to be processed, z represents the first distance, and f represents the second distance.
Optionally, the apparatus further comprises:
the first obtaining module is used for carrying out image acquisition on a large screen to be controlled through a first camera in the binocular cameras to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the first determining submodule is specifically configured to:
and calculating the distance between the first camera and the large screen to be controlled as a first distance according to the camera distance of the binocular cameras, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular cameras.
Optionally, the first determining submodule is specifically configured to:
calculating the first distance using the following equation:
z = f * b/d; wherein z represents the first distance, f represents a distance between an imaging plane and a lens of the first camera, b represents a camera pitch of the binocular camera, and d represents an imaging point pitch of the binocular camera.
Optionally, the apparatus further comprises:
the second obtaining module is used for carrying out image acquisition on the large screen to be controlled through the first camera in the binocular cameras to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the mapping relationship comprises: a first partial mapping relationship and a second partial mapping relationship; the acquisition module includes:
the third determining submodule is used for determining a mapping relation of pixel points in the image to be processed, which is mapped to a three-dimensional space coordinate system, as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera; the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, is positioned on the plane where the large screen to be controlled is positioned;
and the fourth determining submodule is used for determining the mapping relation between the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, and the pixel point in the large screen to be controlled, and the mapping relation is used as the second part mapping relation.
Optionally, the third determining submodule is specifically configured to:
determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values;
and determining a mapping relation of the offset coordinate value to a three-dimensional space coordinate system according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera, and taking the mapping relation as the first part mapping relation.
Optionally, the third determining sub-module is further configured to:
calculating the offset coordinate value of the pixel point in the image to be processed by using the following formula:
P = u - L/2; Q = -v + W/2; wherein (u, v) represents the coordinate value of the pixel point in the image coordinate system in the image to be processed, (P, Q) represents the offset coordinate value of the pixel point in the image to be processed, L represents the length of the image to be processed, and W represents the width of the image to be processed;
determining that the first partial mapping relationship is:
Z = f * b/d'; X = P * Z/f; Y = Q * Z/f; wherein (X, Y, Z) represents coordinate values in the three-dimensional space coordinate system, f represents a distance of an imaging plane from a lens of the first camera, b represents a camera pitch of the binocular camera, and d' represents an imaging point pitch of the binocular camera in a coordinate system with the first camera as an origin;
mapping pixel points in the image to be processed to the three-dimensional space coordinate system according to the first part mapping relation;
determining a first side vector and a second side vector of a large screen to be controlled in the three-dimensional space coordinate system, wherein the intersection point of the first side vector and the second side vector is the angular point of the large screen to be controlled;
the fourth determining submodule is further configured to:
determining that the second partial mapping relationship is:
A = |v| * cos β; B = |v| * cos α;
wherein (A, B) represents the coordinate value of the pixel point in the large screen to be controlled in the coordinate system of the large screen to be controlled, v represents the vector formed by the mapped pixel point and the corner point in the three-dimensional space coordinate system, α represents the angle between v and the second edge vector, and β represents the angle between the target vector v and the first edge vector.
Optionally, the first operation instruction is an instruction for adjusting a picture position, where the first operation instruction includes a first pre-adjustment position and a first post-adjustment position, and both the first pre-adjustment position and the first post-adjustment position are positions in the image to be processed;
the first conversion module is specifically configured to:
converting the first position before adjustment into a second position before adjustment according to the mapping relation; converting the first adjusted position into a second adjusted position according to the mapping relation to obtain a second operation instruction comprising the second pre-adjusted position and the second post-adjusted position, wherein the second pre-adjusted position and the second post-adjusted position are positions in the large screen to be controlled;
the control module is specifically configured to:
determining a picture to be adjusted in the large screen to be controlled according to the second position before adjustment; and adjusting the picture to be adjusted to the second adjusted position.
Optionally, the control module is further configured to interchange the picture displayed at the second adjusted position with the picture to be adjusted;
or, the picture to be adjusted is adjusted to the second adjusted position by zooming the picture to be adjusted.
Optionally, the control module is further configured to calculate a picture scaling factor according to a position relationship between the second position before adjustment and the second position after adjustment; and zooming the picture to be adjusted according to the picture zooming coefficient to obtain a zoomed picture, wherein the zoomed picture is positioned at the second adjusted position.
Optionally, the control module is further configured to calculate a pre-adjustment size of the picture to be adjusted according to a position relationship between the second pre-adjustment position and a preset position in the picture to be adjusted; calculating the adjusted size of the picture to be adjusted according to the position relation between the second adjusted position and the preset position in the picture to be adjusted; and calculating a picture scaling coefficient of the adjusted size relative to the pre-adjusted size.
Optionally, the first operation instruction is a new picture display instruction, where the new picture display instruction includes a new picture identifier and a first display position of a new picture;
the first conversion module is specifically configured to: converting the first display position into a second display position according to the mapping relation to obtain a second operation instruction comprising the new picture identifier and the second display position;
the control module is specifically configured to: and displaying the picture corresponding to the new picture identification at the second display position of the large screen to be controlled.
Optionally, the apparatus further comprises:
the identification module is used for identifying the edge of each picture in the image to be processed as a first edge;
the second conversion module is used for converting each first edge into an edge of the picture in the large screen to be controlled as a second edge according to the mapping relation;
the control module is specifically configured to:
determining a picture to be adjusted corresponding to the second operation instruction according to each second edge; and adjusting the picture to be adjusted according to the second operation instruction.
Optionally, the identification module includes:
the correction submodule is used for correcting a non-rectangular picture in the image to be processed into a rectangular picture to obtain a corrected image;
and the identification submodule is used for identifying the edge of each picture in the corrected image as a first edge.
Optionally, the identification submodule is specifically configured to:
identifying horizontal and vertical lines in the corrected image;
counting the number of pixel points included by each identified horizontal line and vertical line;
and selecting lines of which the number of the pixel points meets the preset condition as a first edge.
In order to achieve the above object, an embodiment of the present invention further provides an electronic device, including a processor and a memory;
a memory for storing a computer program;
and the processor is used for realizing any large-screen control method when executing the program stored in the memory.
Optionally, the apparatus may further include:
the binocular camera is used for acquiring images of a large screen to be controlled through a first camera in the binocular camera to obtain images to be processed, and the images to be processed are sent to the processor; wherein the first camera is a left camera or a right camera.
Optionally, the apparatus may further include:
and the touch screen is used for displaying the image to be processed and receiving a first operation instruction of a user for the image to be processed.
In order to achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements any of the above large-screen control methods.
When the embodiment of the invention is applied to large screen control, a first operation instruction for an image to be processed is received; a mapping relationship between pixel points in the image to be processed and pixel points in the large screen to be controlled is acquired; the first operation instruction is converted into a second operation instruction for the large screen to be controlled according to the mapping relationship; and the large screen to be controlled is controlled based on the second operation instruction. With this scheme, the large screen is controlled through the user's operations on the image, which improves the convenience of control.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1a is a schematic flowchart of a large screen control method according to an embodiment of the present invention;
fig. 1b is a schematic view of a first display screen in an electronic device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a triangulation principle provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of a camera not capturing an image of a large screen in an alignment manner according to an embodiment of the present invention;
fig. 4a is a schematic view of a first display screen in a large screen according to an embodiment of the present invention;
FIG. 4b is a schematic diagram of a second display screen in a large screen according to an embodiment of the present invention;
FIG. 4c is a schematic diagram of a third display screen in a large screen according to an embodiment of the present invention;
FIG. 4d is a schematic diagram of a second display screen of the electronic device according to the embodiment of the invention;
FIG. 5 is a schematic diagram of an edge of a picture according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a second large screen control method according to an embodiment of the present invention;
fig. 7 is a schematic view of a scenario provided by an embodiment of the present invention;
fig. 8 is a third flowchart illustrating a large screen control method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a large-screen control device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In order to solve the foregoing technical problems, embodiments of the present invention provide a large screen control method, apparatus, and device. The method and apparatus may be applied to various electronic devices, such as mobile phones, tablet computers and other handheld terminals, or notebook computers, desktop computers, and the like, without specific limitation. The electronic device executing the present scheme (the execution subject, hereinafter referred to as the electronic device) may have an image capturing function, or may be connected to a camera.
First, the large screen control method provided by the embodiment of the present invention is described in detail. Fig. 1a is a first flowchart of the large screen control method according to an embodiment of the present invention, which includes the following steps:
s101: a first operation instruction for an image to be processed is received. The image to be processed is obtained by carrying out image acquisition on the large screen to be controlled.
The electronic device can be connected with the large screen to be controlled in advance, for example, through various communication modes such as WLAN (Wireless Local Area Network) and Bluetooth; the specific connection mode and communication protocol are not limited. After the electronic device is connected with the large screen to be controlled, the large screen can be controlled by adopting this embodiment.
In one case, an application program may be installed in the electronic device in advance, and a virtual module corresponding to each signal source in the large screen is configured in the application program.
For example, if the electronic device has an image capturing function, the electronic device may capture an image for a large screen to be controlled. Or, the electronic equipment can be connected with a camera, and the camera acquires images for a large screen to be controlled and sends the acquired images to the electronic equipment.
In one case, the image acquired by the electronic device may be a video image. If the electronic equipment has the function of recording the video, the electronic equipment can record the video aiming at the large screen to be controlled. If the electronic equipment is connected with the camera, the camera records videos for the large screen to be controlled and sends each recorded frame of video image to the electronic equipment.
For convenience of description, an image to which the first operation instruction is directed is referred to as a to-be-processed image. For example, the electronic device may display the acquired image to a user, and the user may perform an operation on the displayed image, where the image targeted by the operation is the image to be processed, and the operation may be considered as a first operation instruction sent for the image to be processed. In one case, the electronic device may be provided with a touch screen, and the electronic device displays an image to a user on the touch screen, and the user operates the image with a finger. Or, the electronic device may also be a notebook computer or a desktop computer, the image is displayed by the electronic device through the display screen, and the user operates the image through a mouse or a touch panel.
One or more pictures can be displayed on the large screen, and these pictures can come from various signal sources. Correspondingly, the image to be processed includes a large screen area, in which one or more pictures can be displayed. As shown in fig. 1b, which is a schematic diagram of an image to be processed, fig. 1b includes picture 1, picture 2, and picture 3, which may correspond to the pictures of the signal sources on the large screen. The number of pictures displayed on the large screen is not limited; in addition, the pictures may be closely adjacent or separated by some distance, and the specific separation distance is not limited. The user can adjust the position of a displayed picture in the image to be processed, for example by dragging, zooming in, or zooming out. Alternatively, the user may open a new picture and determine its position in the image to be processed. That is, the first operation instruction may be an instruction to adjust a picture, or an instruction to open a new picture and determine the position of the new picture; the specific content of the first operation instruction is not limited.
S102: and acquiring the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled.
The pixel points in the image to be processed are the coordinate points of the large screen to be controlled in the image coordinate system, and the pixel points in the large screen to be controlled are the coordinate points of the large screen to be controlled in the large-screen coordinate system. Determining the mapping relationship between the two is therefore determining the mapping relationship between the image coordinate system and the large-screen coordinate system.
In one embodiment, after receiving a first operation instruction for an image to be processed, a mapping relationship between a pixel point in the image to be processed and a pixel point in a large screen to be controlled may be determined. In the implementation mode, the determined mapping relation reflects the relation between the image and the large screen in the current state in real time, and the accuracy is high.
In another embodiment, the mapping relationship between the pixel point in the image and the pixel point in the large screen to be controlled may be predetermined, and after receiving the first operation instruction for the image to be processed, the predetermined mapping relationship is obtained as the mapping relationship between the pixel point in the image to be processed and the pixel point in the large screen to be controlled.
For example, if the electronic device performs video recording on a large screen to be controlled, the positions of the electronic device and the large screen are fixed or changed little; under the condition, after the electronic equipment records the video image, the mapping relation between the pixel points in the video image and the pixel points in the large screen to be controlled can be determined; the electronic equipment directly obtains the mapping relation after receiving the first operation instruction, and the mapping relation is used as the mapping relation between the pixel point in the image to be processed and the pixel point in the large screen to be controlled.
As another example, assuming that the camera performs video recording on the large screen to be controlled, the positions of the camera and the large screen are fixed or have small changes, and the camera sends a recorded video image to the electronic device; under the condition, after receiving the video image sent by the camera, the electronic equipment can determine the mapping relation between the pixel points in the video image and the pixel points in the large screen to be controlled; the electronic equipment directly obtains the mapping relation after receiving the first operation instruction, and the mapping relation is used as the mapping relation between the pixel point in the image to be processed and the pixel point in the large screen to be controlled.
In this embodiment, the mapping relationship is determined in advance; after the first operation instruction is received, the mapping relationship is directly acquired without performing a complex operation to determine it, which reduces the user's waiting time and makes the scheme more responsive.
As an embodiment, S102 may include: determining the distance between a camera for acquiring images of the large screen to be controlled and the large screen to be controlled as a first distance; calculating a ratio of the first distance to the second distance, the second distance being: a distance of an imaging plane from a lens of the camera; and determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled according to the ratio.
If the image to be processed is acquired by the electronic equipment, the first distance is the distance between the electronic equipment and the large screen; and if the image to be processed is acquired by a camera connected with the electronic equipment, the first distance is the distance between the camera and the large screen. For convenience of description, the following description will be made by taking a camera as an example.
The first distance may be determined in various ways, for example, a light source may be disposed in the camera, the light source emits structured light to irradiate the large screen, the structured light has certain texture characteristics, and then the distance between the camera and the large screen is determined according to the texture difference between the reflected light and the structured light. Alternatively, a light source and a photoreceptor may be provided in the camera, the light source emits light to illuminate the large screen, the photoreceptor receives light reflected from the large screen, and the distance between the camera and the large screen is calculated by calculating the time difference between the emitted light and the received light.
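For the second (time-of-flight) variant, the distance follows from the round-trip travel time of the light; the text does not give the formula, so the standard relation is shown below as an assumption:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(time_delta_s):
    """One-way distance from the round-trip time between emitting light
    and receiving its reflection: the light covers 2 * z in time_delta_s."""
    return SPEED_OF_LIGHT * time_delta_s / 2.0
```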
As an embodiment, a binocular camera may be disposed in the electronic device, or the electronic device may be connected to the binocular camera, so that the image to be processed may be captured by any one of the cameras (left camera or right camera) of the binocular camera, and for convenience of description, the camera capturing the image to be processed is referred to as a first camera.
In the embodiment, the image acquisition can be carried out on the large screen to be controlled through the first camera in the binocular camera to obtain an image to be processed; wherein the first camera is a left camera or a right camera. The way of determining the first distance is: and calculating the distance between the first camera and the large screen to be controlled as a first distance according to the camera distance of the binocular cameras, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular cameras.
Specifically, the first distance may be calculated by the following equation:
z ═ f × b/d; wherein z represents the first distance, f represents a distance between an imaging plane and a lens of the first camera, b represents a camera pitch of the binocular camera, and d represents an imaging point pitch of the binocular camera.
The distance f between the imaging plane and the lens and the camera pitch b of the binocular camera are intrinsic parameters of the binocular camera and can be obtained in advance. Referring to fig. 2, according to the principle of triangulation, z/f = x/xl = (x - b)/xr, where x represents the distance from the point P to the straight line corresponding to the left camera (that is, x = xl * z/f), xl represents the distance from the imaging point of the left camera to the straight line corresponding to the left camera, and xr represents the distance from the imaging point of the right camera to the straight line corresponding to the right camera. From the above, z = f * b/(xl - xr) can be derived, where xl - xr is the distance d between the left camera imaging point and the right camera imaging point; that is, z = f * b/d.
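In code, the triangulation reduces to a single expression. A minimal Python sketch, with illustrative identifier names and units assumed consistent:

```python
def stereo_depth(f, b, d):
    """Depth from binocular disparity, per z = f * b / d.

    f: distance from the imaging plane to the lens,
    b: camera pitch (baseline) of the binocular camera,
    d: imaging point pitch (disparity) xl - xr.
    """
    if d <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f * b / d
```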
As described above, the distance f (second distance) between the imaging plane and the lens may be obtained in advance, and the ratio z/f of the first distance to the second distance is calculated; determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled as follows:
A1 = (u1 - u2) * z/f; B1 = (v1 - v2) * z/f;
wherein (A1, B1) represents the coordinates of the pixel point in the large screen to be controlled, (u1, v1) represents the coordinates of the pixel point in the image to be processed, and (u2, v2) represents the coordinates of the corner point of the large screen contained in the image to be processed.
As described above, the pixel point in the image to be processed is the coordinate point of the large screen to be controlled in the image coordinate system, and the pixel point in the large screen to be controlled is the coordinate point of the large screen to be controlled in the large screen coordinate system; (a1, B1) are coordinate values in the large screen coordinate system, (u1, v1) and (u2, v2) are coordinate values in the image coordinate system; and (u2, v2) is a coordinate value of the corner point of the large screen in the image coordinate system.
The mapping relationship determined above can be understood as the mapping relationship in the case where the camera captures the image while squarely facing the large screen. If the camera captures the image of the large screen at an inclination angle, the large screen in the captured image is no longer rectangular, as shown in fig. 3.
In the embodiment, the image acquisition can be carried out on the large screen to be controlled through the first camera in the binocular camera to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the mapping relationship comprises: a first partial mapping relationship and a second partial mapping relationship; s102 includes:
determining a mapping relation of pixel points in the image to be processed, which is mapped to a three-dimensional space coordinate system, as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera; the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, is positioned on the plane where the large screen to be controlled is positioned;
and determining the mapping relation between the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, and the pixel point in the large screen to be controlled as the second part mapping relation.
In this embodiment, the pixel points in the image to be processed are mapped to the three-dimensional space coordinate system, and then mapped to the large screen to be controlled from the three-dimensional space coordinate system. In other words, the image coordinate system is mapped to the three-dimensional space coordinate system and then mapped to the large-screen coordinate system.
And after the pixel points in the image to be processed are mapped to the three-dimensional space coordinate system, the pixel points are positioned on the plane where the large screen in the real space is positioned. For example, mapping the pixel points in the image to be processed to the three-dimensional space coordinate system may be divided into two steps: firstly, determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values; and secondly, determining a mapping relation of the offset coordinate value mapped to a three-dimensional space coordinate system as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera.
Specifically, in the first step, the image pixel points are offset into a coordinate system with the first camera as the origin. As shown in fig. 3, this can be understood as shifting from the corner point 2 to the central point O (the first camera corresponds to the central point of the image it collects); that is, the coordinate points in the image coordinate system are translated toward the lower right. Assuming that the coordinate value of a pixel point 1 of the image to be processed in the image coordinate system is (u, v), and that its coordinate value in the coordinate system with the first camera as the origin is (P, Q), then P = u - L/2 and Q = -v + W/2, where L denotes the length of the image to be processed and W denotes the width of the image to be processed.
In the second step, the coordinate system with the first camera as the origin is mapped to the three-dimensional space coordinate system. Still taking pixel point 1 as an example, and assuming that its coordinate values in the three-dimensional space coordinate system are (X, Y, Z), it follows from the triangulation principle in fig. 2 that Z = f * b/d'; X = P * Z/f; Y = Q * Z/f, where d' represents the imaging point pitch of the binocular camera in the coordinate system with the first camera as the origin. Fig. 2 shows the XZ coordinate system; the YZ coordinate system is analogous and is not described again.
Here, Z represents the Z-axis coordinate value, while z in fig. 2 and in the other passages represents the distance between the lens and the large screen.
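Putting the two steps together, the first partial mapping can be sketched as follows; the per-pixel imaging point pitch d_prime is assumed to be available from stereo matching, and all names are illustrative:

```python
def pixel_to_3d(u, v, L, W, f, b, d_prime):
    """First partial mapping: image pixel (u, v) -> 3D point (X, Y, Z)
    on the plane of the large screen.

    L, W: length and width of the image to be processed;
    f, b: imaging-plane distance and camera pitch of the binocular camera;
    d_prime: imaging point pitch for this pixel in the coordinate system
    with the first camera as origin (assumed given by stereo matching).
    """
    # Step 1: offset into the coordinate system with the first camera as origin.
    P = u - L / 2.0
    Q = -v + W / 2.0
    # Step 2: triangulate the depth, then back-project onto the screen plane.
    Z = f * b / d_prime
    X = P * Z / f
    Y = Q * Z / f
    return X, Y, Z
```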
After the first partial mapping relationship is obtained, all the pixel points in the large screen area of the image to be processed can be mapped to the three-dimensional space coordinate system. Referring to fig. 3, the large screen area in the image to be processed is a trapezoid. For convenience of description, one side of this quadrangle is denoted as edge 1 and another side as edge 2; edge 1 and edge 2 intersect at the corner point 2. Assume that there is a point 3 on edge 1 and a point 4 on edge 2, that point 3 and corner point 2 form vector 1, and that point 4 and corner point 2 form vector 2.
And mapping the vector 1 into a three-dimensional space coordinate system to obtain a first edge vector, mapping the vector 2 into the three-dimensional space coordinate system to obtain a second edge vector, wherein the intersection point of the first edge vector and the second edge vector is the corner point 2 of the large screen to be controlled.
The second partial mapping relationship, from the three-dimensional space coordinate system to the large-screen coordinate system, is:
A = |v| * cos β; B = |v| * cos α;
wherein (A, B) represents the coordinate value of the pixel point in the large screen to be controlled in the coordinate system of the large screen to be controlled, v represents the vector formed, in the three-dimensional space coordinate system, by the mapped pixel point and the corner point of the large screen to be controlled, α represents the angle between v and the second edge vector, and β represents the angle between the target vector v and the first edge vector.
With reference to fig. 3, v is the vector in the three-dimensional space coordinate system to which the vector formed by pixel point 1 and corner point 2 is mapped.
The mapping process described above can be expressed as: (u, v) → (P, Q) → (X, Y, Z) → (A, B); wherein (u, v) represents coordinate values in the image coordinate system, (P, Q) represents coordinate values in the coordinate system with the first camera as the origin, (X, Y, Z) represents coordinate values in the three-dimensional space coordinate system, and (A, B) represents coordinate values in the large-screen coordinate system. Together, these constitute the mapping relationship between the pixel points in the image to be processed and the pixel points in the large screen to be controlled.
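The second partial mapping can likewise be sketched as the projection of the corner-to-point vector onto the two edge directions. Note that the pairing of α and β with the coordinates A and B follows the reconstruction above and should be treated as an assumption:

```python
import numpy as np

def space_to_screen(point_3d, corner_3d, edge1, edge2):
    """Second partial mapping: 3D point -> large-screen coordinates (A, B).

    corner_3d: the mapped corner point of the large screen;
    edge1, edge2: the first and second edge vectors in 3D space.
    A and B are |v| times the cosine of the angle between v and an edge,
    i.e. the projections of v onto the two edge directions.
    """
    v = np.asarray(point_3d, dtype=float) - np.asarray(corner_3d, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return 0.0, 0.0  # the point coincides with the corner
    e1 = np.asarray(edge1, dtype=float)
    e2 = np.asarray(edge2, dtype=float)
    cos_beta = (v @ e1) / (norm * np.linalg.norm(e1))   # angle with the first edge
    cos_alpha = (v @ e2) / (norm * np.linalg.norm(e2))  # angle with the second edge
    return norm * cos_beta, norm * cos_alpha            # (A, B)
```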
S103: and converting the first operation instruction into a second operation instruction aiming at the large screen to be controlled according to the mapping relation.
As described above, the mapping relationship between the image pixel point and the large-screen pixel point is obtained in S102, and therefore, the operation of the user on the image can be converted into the operation on the large screen.
S104: and controlling the large screen to be controlled based on the second operation instruction.
In one embodiment, the first operation instruction is an instruction for adjusting a position of a picture, and the first operation instruction includes a first pre-adjustment position and a first post-adjustment position, and the first pre-adjustment position and the first post-adjustment position are positions in an image to be processed; in this case, S103 may include: converting the first position before adjustment into a second position before adjustment according to the mapping relation; converting the first adjusted position into a second adjusted position according to the mapping relation to obtain a second operation instruction comprising the second pre-adjusted position and the second post-adjusted position, wherein the second pre-adjusted position and the second post-adjusted position are positions in the large screen to be controlled;
s104 may include: determining a picture to be adjusted in the large screen to be controlled according to the second position before adjustment; and adjusting the picture to be adjusted to the second adjusted position.
Taking the case where the user operates the image on a touch screen as an example, the position first clicked by the user can be regarded as the first position before adjustment, which is mapped into the large screen according to the mapping relationship of S102 to obtain the second position before adjustment; the picture at the second position before adjustment is the picture to be adjusted. The position of the user's finger after moving can be regarded as the first adjusted position, which is mapped into the large screen according to the mapping relationship of S102 to obtain the second adjusted position, and the picture to be adjusted is adjusted to the second adjusted position.
In one embodiment, the to-be-adjusted picture can be directly moved to the second adjusted position.
For example, referring to fig. 4a, assuming that the picture to be adjusted is the picture of signal source 1, the size of the picture is kept unchanged while it is moved; after the move, the center point, a corner point, or a designated pixel point of the picture is located at the second adjusted position.
In another embodiment, adjusting the picture to be adjusted to the second post-adjustment position may include: interchanging the picture displayed at the second post-adjustment position with the picture to be adjusted.
For example, referring to fig. 4b, assume the picture to be adjusted is the picture of signal source 1 and the picture of signal source 5 is displayed at the second post-adjustment position. In this embodiment the picture of signal source 1 and the picture of signal source 5 may be interchanged; after the interchange the picture of signal source 1 is located at the second post-adjustment position, so the picture to be adjusted has been adjusted to that position by way of the interchange.
In another embodiment, adjusting the picture to be adjusted to the second post-adjustment position may include: zooming the picture to be adjusted so that it is adjusted to the second post-adjustment position.
For example, referring to fig. 4c, assume the picture to be adjusted is the picture of signal source 1. In this embodiment the picture of signal source 1 is zoomed, and the zoomed picture is located at the second post-adjustment position. In one case, a picture scaling factor may be calculated according to the positional relationship between the second pre-adjustment position and the second post-adjustment position, and the picture to be adjusted is zoomed according to that factor to obtain a zoomed picture located at the second post-adjustment position. That is, the picture content corresponding to the second pre-adjustment position is adjusted to the second post-adjustment position.
The picture scaling factor indicates the factor by which the picture size is enlarged or reduced. For example, referring to fig. 4c, a corner point of the picture to be adjusted may be fixed, and the picture is then zoomed according to the picture scaling factor.
In one case, the pre-adjustment size of the picture to be adjusted may be calculated according to the positional relationship between the second pre-adjustment position and a preset position in the picture to be adjusted; the post-adjustment size is calculated according to the positional relationship between the second post-adjustment position and the same preset position; and the picture scaling factor is that of the post-adjustment size relative to the pre-adjustment size.
For example, the preset position in the picture to be adjusted may be the picture center point, a corner point, or the position of a specified pixel point, without specific limitation. Suppose the preset position is the top-left corner point of the picture, and that both the second pre-adjustment position and the second post-adjustment position are picture center points. Still referring to fig. 4c, and assuming the picture to be adjusted is the picture of signal source 1: if the distance from the second pre-adjustment position to the top-left corner point of the picture is 5 cm and the distance from the second post-adjustment position to the top-left corner point is 10 cm, the scaling factor can be taken as 2. The adjustment process may be: fix the top-left corner point of the picture of signal source 1 and zoom according to the scaling factor, so that the center point of the zoomed picture is located at the second post-adjustment position.
As another example, suppose the preset position in the picture to be adjusted is the top-left corner point of the picture, whose coordinates in the large screen are (0, 0), and that both the second pre-adjustment position and the second post-adjustment position are picture center points, with large-screen coordinates (10, 10) and (20, 20) respectively; the scaling factor calculated from these coordinates is 2. The adjustment process may be: fix the top-left corner point of the picture of signal source 1 and zoom according to the scaling factor, so that the center point of the zoomed picture is located at the second post-adjustment position.
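A minimal sketch of the scaling-factor computation under the assumptions of these examples (the preset position is the fixed top-left corner point and both adjustment positions are picture center points); Python's math.dist is used for the distances:

import math

def scale_factor(corner, pre_center, post_center):
    # ratio of the corner-to-center distances after and before adjustment
    return math.dist(corner, post_center) / math.dist(corner, pre_center)

# the worked example above: corner (0, 0), centers (10, 10) and (20, 20)
print(scale_factor((0, 0), (10, 10), (20, 20)))  # 2.0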
In the above embodiments, the position at which a signal source picture is displayed on the large screen is adjusted by operating the electronic device.
In another embodiment, the first operation instruction is a new-picture display instruction, which includes a new picture identifier and a first display position of the new picture. In this case, S103 may include: converting the first display position into a second display position according to the mapping relation, to obtain a second operation instruction including the new picture identifier and the second display position;
S104 may include: displaying the picture corresponding to the new picture identifier at the second display position of the large screen to be controlled.
In this embodiment, the picture identifier of each signal source may be agreed upon in advance between the electronic device and the large screen. The user can then input or select a new picture identifier in the electronic device and determine the display position of the new picture in the image to be processed. To distinguish the two in the description, the display position of the new picture in the image is referred to as the first display position, and the display position of the new picture in the real large screen is referred to as the second display position.
The first display position is mapped into the large screen according to the mapping relation of S102 to obtain the second display position, and the picture of the signal source corresponding to the new picture identifier is displayed at the second display position.
As described above, in one case an application program may be installed in the electronic device in advance, with a virtual module configured in the application for each signal source of the large screen. For example, as shown in fig. 4d, the operation interface of the application may show the image to be processed on the right side of the interface and the virtual module for each signal source on the left side. Suppose a picture of signal source 6 needs to be newly added to the large screen (no picture of signal source 6 is currently displayed): the user may first click the virtual module for signal source 6 on the left side of the interface, then select a display position within the large-screen area of the image to be processed on the right side and click it; for convenience of description, this is referred to as the first display position.
These two click operations together can be regarded as a new-picture display instruction, and the signal source identifier (signal source 6) corresponding to the clicked virtual module can be regarded as the new picture identifier. According to the mapping relation determined in S102, the first display position clicked by the user is converted into a second display position, which is a position in the large screen, and the picture corresponding to the new picture identifier (signal source 6) is displayed at the second display position. Thus, by operating the electronic device, a new signal source picture is displayed on the large screen.
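As a sketch of the two-click flow (map_to_screen and wall.open_window are assumed interfaces, not part of the original disclosure):

def on_new_picture(source_id, click_xy, map_to_screen, wall):
    # the first display position clicked in the image becomes
    # the second display position in the large screen
    A, B = map_to_screen(*click_xy)
    wall.open_window(source_id, (A, B))  # show the new signal source there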
Thus, this embodiment realizes AR (Augmented Reality) based large-screen control: the user's operations in the virtual image are mapped onto the real large screen, realizing interaction between the virtual world and the real world.
As described above, in one case video recording may be performed on the large screen to be controlled, with the electronic device displaying each recorded video frame. If the user operates on a displayed video frame, the recorded video shows in real time whether the operation has been reflected on the large screen, so the user receives timely feedback on whether the operation took effect, which gives a better user experience.
For example, suppose the electronic device is recording video of the large screen to be controlled and, during recording, the user performs an operation on a video frame that amounts to a picture-position adjustment instruction. The electronic device converts it into a second operation instruction for the large screen and controls the large screen accordingly; if the large screen is adjusted according to the second operation instruction, the adjustment appears in the video images recorded by the electronic device, and the user thereby knows that the adjustment instruction has taken effect.
For another example, if the electronic device converts the picture-position adjustment instruction into a second operation instruction for the large screen and controls the large screen accordingly, but the large screen cannot be adjusted because of a communication fault or other abnormal condition, then the video images recorded by the electronic device show that the large screen has not been adjusted as expected, and the user thereby knows that the adjustment instruction did not take effect.
As an embodiment, the edge of each picture in the image to be processed may be identified as a first edge, and each first edge is then converted, according to the mapping relation, into an edge of the corresponding picture in the large screen to be controlled, as a second edge. In this case, S104 may include: determining the picture to be adjusted corresponding to the second operation instruction according to each second edge, and adjusting that picture according to the second operation instruction.
In some cases a large screen displays many pictures that are closely adjacent to one another, as shown in fig. 5. In this embodiment the edge of each picture in the image to be processed is identified so as to distinguish the different pictures, which makes it possible to determine more accurately which picture a user's operation instruction is directed at.
There are many ways to identify picture edges in the image to be processed. For example, a neural network model capable of locating picture edges can be trained and then used to identify the picture edges in the image; alternatively, the picture edges in the image may be identified by manual calibration.
Neural network models belong to the technical field of AI (Artificial Intelligence); in this scheme, therefore, large-screen control can be realized on the basis of both AR and AI: the mapping between the image and the large screen is the AR part, and using a neural network model to identify picture edges in the image is the AI part.
For another example, if the camera faces the large screen head-on during image acquisition, the pictures in the image to be processed are all rectangular. In this case, horizontal lines and vertical lines can be identified in the image to be processed; the number of pixel points included in each identified horizontal and vertical line is counted; and the lines whose pixel counts satisfy a preset condition are selected as first edges.
Specifically, the image can be processed through the convolutional layers of a neural network to obtain the lines in the image; the lines are thinned with a Canny operator, and the horizontal and vertical lines among them are then identified. It will be appreciated that if a picture is rectangular its edges are horizontal or vertical lines, but not every horizontal or vertical line is a picture edge. Since a picture edge contains more pixel points than other horizontal or vertical lines, the lines with the largest pixel counts can be selected from among the horizontal and vertical lines as the picture edges.
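As one possible realization (an OpenCV-based sketch that omits the neural-network line extraction and applies the Canny operator directly, so it illustrates the idea rather than the reference implementation), thinned edges can be summed per row and per column, and the strongest rows and columns kept as picture edges; the keep parameter plays the role of the preset condition:

import cv2
import numpy as np

def find_frame_edges(gray, keep=4):
    edges = cv2.Canny(gray, 50, 150)   # thinned edge map
    rows = (edges > 0).sum(axis=1)     # pixel count per horizontal line
    cols = (edges > 0).sum(axis=0)     # pixel count per vertical line
    # the lines with the largest pixel counts are taken as picture edges
    h = np.argsort(rows)[-keep:]
    v = np.argsort(cols)[-keep:]
    return sorted(h), sorted(v)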
If the camera does not face the large screen head-on during image acquisition, the pictures in the image to be processed are no longer rectangular. In this case, the non-rectangular pictures in the image to be processed can be corrected into rectangular pictures to obtain a corrected image, and the edge of each picture in the corrected image is then identified as a first edge.
Specifically, horizontal lines and vertical lines may be identified in the corrected image; the number of pixel points included in each identified horizontal and vertical line is counted; and the lines whose pixel counts satisfy a preset condition are selected as first edges.
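A sketch of the correction step using OpenCV's perspective transform; corners is assumed to hold the four detected corner points of the non-rectangular picture in top-left, top-right, bottom-right, bottom-left order, and the output size is an arbitrary choice:

import cv2
import numpy as np

def rectify(image, corners, out_w=1920, out_h=1080):
    # map the four detected corners onto an upright rectangle,
    # yielding the corrected image in which pictures are rectangular
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))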
When the embodiment of the invention is applied to large-screen control, a first operation instruction aiming at an image to be processed is received; acquiring a mapping relation between pixel points in an image to be processed and pixel points in a large screen to be controlled; converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relation; controlling the large screen to be controlled based on the second operation instruction; therefore, according to the scheme, the large screen is controlled through the operation of the user in the image, and the convenience of control is improved.
Fig. 6 is a second schematic flowchart of a large-screen control method according to an embodiment of the present invention. The embodiment of fig. 6 is mainly directed to the case where the camera faces the large screen head-on during image acquisition, and includes:
S601: acquiring an image of the large screen to be controlled through a first camera of a binocular camera, wherein the first camera is the left camera or the right camera.
In the embodiment of fig. 6, it is assumed that the execution subject is a handheld terminal in which a binocular camera is provided. Referring to fig. 7, a left camera or a right camera of a binocular camera performs image acquisition for a large screen to be controlled. For convenience of description, a camera that performs image acquisition on a large screen to be controlled is referred to as a first camera.
S602: a first operation instruction for an image to be processed is received.
For convenience of description, an image to which the first operation instruction is directed is referred to as a to-be-processed image. For example, the terminal may display the acquired image to a user, and the user may perform an operation on the displayed image, where the image targeted by the operation is the image to be processed, and the operation may be considered as a first operation instruction sent for the image to be processed. In one case, a touch screen may be disposed in the terminal, and the terminal displays an image to a user on the touch screen, and the user operates the image with a finger.
One or more pictures can be displayed in the large screen, and the pictures can be pictures of various signal sources. Correspondingly, the image to be processed includes a large screen area, and one or more pictures may also be displayed in the large screen area, as shown in fig. 1b, fig. 1b is a schematic diagram of the image to be processed, and fig. 1b includes a picture 1, a picture 2, and a picture 3, which may correspond to pictures of each signal source in the large screen. The number of the display pictures in the large screen is not limited, and in addition, the pictures can be closely adjacent or separated by some distance, and the specific separation distance is not limited. The user can adjust the position of the display picture in the image to be processed, such as dragging, zooming in, zooming out and the like. Or, the user may also open a new screen and determine the position of the new screen in the image to be processed, that is, the first operation instruction may be an instruction to adjust the screen or an instruction to open the new screen and determine the position of the new screen, and specific content of the first operation instruction is not limited.
S603: and calculating the distance between the first camera and the large screen to be controlled as a first distance according to the camera distance of the binocular cameras, the distance between the imaging plane and the lens of the first camera and the imaging point distance of the binocular cameras.
Specifically, the first distance may be calculated by the following equation:
z = f × b/d; wherein z represents the first distance, f represents the distance between the imaging plane and the lens of the first camera, b represents the camera pitch of the binocular camera, and d represents the imaging point pitch of the binocular camera.
The distance f between the imaging plane and the lens and the camera pitch b of the binocular camera are internal parameters of the binocular camera and can be obtained in advance. Referring to fig. 2, by the principle of triangulation, z/f = x/xl = (x - b)/xr, where x represents the distance from the point P to the straight line corresponding to the left camera (so x = xl × z/f), xl represents the distance from the imaging point of the left camera to the straight line corresponding to the left camera, and xr represents the distance from the imaging point of the right camera to the straight line corresponding to the right camera. From the above, z = f × b/(xl - xr); and since xl - xr is the distance d between the left-camera imaging point and the right-camera imaging point, z = f × b/d.
S604: calculating a ratio of the first distance to a second distance, the second distance being: a distance of the imaging plane from a lens of the first camera.
S605: and determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled according to the ratio.
As described above, the distance f between the imaging plane and the lens (the second distance) may be obtained in advance, and the ratio z/f of the first distance to the second distance is calculated; the mapping relation between pixel points in the image to be processed and pixel points in the large screen to be controlled is then determined as:
A1 = (u1 - u2) * z/f;  B1 = (v1 - v2) * z/f;
wherein (A1, B1) represents the coordinates of a pixel point in the large screen to be controlled, (u1, v1) represents the coordinates of the corresponding pixel point in the image to be processed, and (u2, v2) represents the coordinates of the corner point of the large screen contained in the image to be processed.
As described above, a pixel point in the image to be processed is a coordinate point of the large screen to be controlled in the image coordinate system, and a pixel point in the large screen to be controlled is a coordinate point of the large screen to be controlled in the large-screen coordinate system; (A1, B1) is a coordinate value in the large-screen coordinate system, (u1, v1) and (u2, v2) are coordinate values in the image coordinate system, and (u2, v2) is specifically the coordinate value of the corner point of the large screen in the image coordinate system.
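Under the head-on assumption of this embodiment, S603–S605 reduce to a few lines of arithmetic; in this sketch the variable names follow the formulas above, and the disparity d is assumed to be available from stereo matching:

def map_point(u1, v1, u2, v2, f, b, d):
    z = f * b / d            # S603: first distance (camera to screen)
    A1 = (u1 - u2) * z / f   # S604/S605: scale the offsets from the
    B1 = (v1 - v2) * z / f   # corner point by the ratio z / f
    return A1, B1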
S606: and converting the first operation instruction into a second operation instruction aiming at the large screen to be controlled according to the mapping relation.
S607: and controlling the large screen to be controlled based on the second operation instruction.
By applying the embodiment shown in fig. 6 of the invention, for the case where the camera faces the large screen head-on during image acquisition, control of the large screen is realized through the user's operations in the image, improving the convenience of control.
Fig. 8 is a third schematic flowchart of a large-screen control method according to an embodiment of the present invention. The embodiment of fig. 8 is mainly directed to the case where the camera does not face the large screen head-on during image acquisition, and includes:
S801: acquiring an image of the large screen to be controlled through a first camera of a binocular camera, wherein the first camera is the left camera or the right camera.
In the embodiment of fig. 8, it is assumed that the execution subject is a handheld terminal in which a binocular camera is provided. Referring to fig. 7, a left camera or a right camera of a binocular camera performs image acquisition for a large screen to be controlled. For convenience of description, a camera that performs image acquisition on a large screen to be controlled is referred to as a first camera.
S802: a first operation instruction for an image to be processed is received.
For convenience of description, an image to which the first operation instruction is directed is referred to as a to-be-processed image. For example, the terminal may display the acquired image to a user, and the user may perform an operation on the displayed image, where the image targeted by the operation is the image to be processed, and the operation may be considered as a first operation instruction sent for the image to be processed. In one case, a touch screen may be disposed in the terminal, and the terminal displays an image to a user on the touch screen, and the user operates the image with a finger.
One or more pictures can be displayed in the large screen, and the pictures can be pictures of various signal sources. Correspondingly, the image to be processed includes a large screen area, and one or more pictures may also be displayed in the large screen area, as shown in fig. 1b, fig. 1b is a schematic diagram of the image to be processed, and fig. 1b includes a picture 1, a picture 2, and a picture 3, which may correspond to pictures of each signal source in the large screen. The number of the display pictures in the large screen is not limited, and in addition, the pictures can be closely adjacent or separated by some distance, and the specific separation distance is not limited. The user can adjust the position of the display picture in the image to be processed, such as dragging, zooming in, zooming out and the like. Or, the user may also open a new screen and determine the position of the new screen in the image to be processed, that is, the first operation instruction may be an instruction to adjust the screen or an instruction to open the new screen and determine the position of the new screen, and specific content of the first operation instruction is not limited.
S803: and determining a mapping relation of pixel points in the image to be processed, which is mapped to a three-dimensional space coordinate system, as a first part mapping relation according to the camera distance of the binocular camera, the distance between the imaging plane and the lens of the first camera and the imaging point distance of the binocular camera.
And the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, is positioned on the plane where the large screen to be controlled is positioned.
In this embodiment, the pixel points in the image to be processed are mapped to the three-dimensional space coordinate system, and then mapped to the large screen to be controlled from the three-dimensional space coordinate system. In other words, the image coordinate system is mapped to the three-dimensional space coordinate system and then mapped to the large-screen coordinate system.
And after the pixel points in the image to be processed are mapped to the three-dimensional space coordinate system, the pixel points are positioned on the plane where the large screen in the real space is positioned. For example, mapping the pixel points in the image to be processed to the three-dimensional space coordinate system may be divided into two steps: firstly, determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values; and secondly, determining a mapping relation of the offset coordinate value mapped to a three-dimensional space coordinate system as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera.
Specifically, in the first step the image pixel points are shifted into a coordinate system with the first camera as the origin. As shown in fig. 3, this can be understood as shifting the corner point 2 to the central point O (the first camera corresponds to the central point of the image it acquires), that is, the coordinate points of the image coordinate system are moved down and to the right. Assuming that the coordinate value of pixel point 1 of the image to be processed in the image coordinate system is (u, v), and that its coordinate value in the coordinate system with the first camera as the origin is (P, Q), then P = u - L/2 and Q = -v + W/2, where L denotes the length of the image to be processed and W denotes the width of the image to be processed.
In the second step, the coordinate system with the first camera as the origin is mapped to the three-dimensional space coordinate system. Still taking pixel point 1 as an example, and assuming its coordinate values in the three-dimensional space coordinate system are (X, Y, Z), it follows from the triangulation principle of fig. 2 that Z = f × b/d'; X = P × Z/f; Y = Q × Z/f; where d' represents the imaging point pitch of the binocular camera in the coordinate system with the first camera as the origin. Fig. 2 shows the x-z coordinate plane; the y-z plane follows the same principle and is not described again.
Here, Z represents the Z-axis coordinate value, while z in fig. 2 and elsewhere represents the distance between the lens and the large screen.
After the first-part mapping relation is obtained, all the pixel points of the large-screen area in the image to be processed can be mapped into the three-dimensional space coordinate system. Referring to fig. 3, for convenience of description, suppose the large-screen area in the image to be processed is an irregular quadrangle: one side of the quadrangle is denoted side 1, an adjacent side is denoted side 2, and side 1 and side 2 intersect at the corner point 2. Suppose there is a point 3 on side 1 and a point 4 on side 2; point 3 and corner point 2 form vector 1, and point 4 and corner point 2 form vector 2.
Vector 1 mapped into the three-dimensional space coordinate system is the first edge vector, and vector 2 mapped into the three-dimensional space coordinate system is the second edge vector.
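For illustration, the two edge vectors might be obtained by mapping the corner point and one point on each side into three-dimensional space and differencing; to_space stands for the first-part mapping and is a hypothetical helper:

import numpy as np

def edge_vectors(corner_uv, point3_uv, point4_uv, to_space):
    c = np.asarray(to_space(*corner_uv))        # corner point 2 in 3D space
    e1 = np.asarray(to_space(*point3_uv)) - c   # vector 1 -> first edge vector
    e2 = np.asarray(to_space(*point4_uv)) - c   # vector 2 -> second edge vector
    return e1, e2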
S804: and determining the mapping relation between the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, and the pixel point in the large screen to be controlled as the second part mapping relation.
The second-part mapping relation, from the three-dimensional space coordinate system to the large-screen coordinate system, is:
A = |T| × cos α;  B = |T| × cos β;
wherein (A, B) represents the coordinates of a pixel point in the large screen to be controlled in the coordinate system of the large screen to be controlled; T is the target vector and |T| its magnitude; α represents the included angle between the target vector and the first edge vector; and β represents the included angle between the target vector and the second edge vector. The target vector is the vector formed, in the three-dimensional space coordinate system, by a pixel point in the large screen to be controlled and a corner point of the large screen to be controlled (the two edge vectors of the rectangular screen being orthogonal, A and B are the projections of the target vector onto the first and second edge directions).
Referring to fig. 3, the target vector is the vector obtained by mapping the vector formed by pixel point 1 and corner point 2 into the three-dimensional space coordinate system.
The mapping process described above can be expressed as: (u, v) → (P, Q) → (X, Y, Z) → (A, B); wherein (u, v) represents a coordinate value in the image coordinate system, (P, Q) represents a coordinate value in the coordinate system with the first camera as the origin, (X, Y, Z) represents a coordinate value in the three-dimensional space coordinate system, and (A, B) represents a coordinate value in the large-screen coordinate system. Together, these mappings constitute the mapping relation between pixel points in the image to be processed and pixel points in the large screen to be controlled.
S805: and converting the first operation instruction into a second operation instruction aiming at the large screen to be controlled according to the first part mapping relation and the second part mapping relation.
S806: and controlling the large screen to be controlled based on a second operation instruction.
By applying the embodiment shown in fig. 8 of the invention, for the case where the camera does not face the large screen head-on during image acquisition, control of the large screen is realized through the user's operations in the image, improving the convenience of control.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a large screen control device, as shown in fig. 9, including:
the receiving module 901 is configured to receive a first operation instruction for an image to be processed, where the image to be processed is obtained by performing image acquisition for a large screen to be controlled;
an obtaining module 902, configured to obtain a mapping relationship between a pixel point in the image to be processed and a pixel point in the large screen to be controlled;
a first conversion module 903, configured to convert the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship;
and a control module 904, configured to control the large screen to be controlled based on the second operation instruction.
As an implementation manner, the obtaining module 902 may include: a first determination submodule, a calculation submodule and a second determination submodule (not shown in the figure), wherein,
the first determining submodule is used for determining the distance between a camera for acquiring images of the large screen to be controlled and the large screen to be controlled as a first distance;
a calculation submodule, configured to calculate a ratio of the first distance to the second distance, where the second distance is: a distance of an imaging plane from a lens of the camera;
and the second determining submodule is used for determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled according to the ratio.
As an embodiment, the mapping relationship is:
A1 = (u1 - u2) * z/f;  B1 = (v1 - v2) * z/f;
wherein (A1, B1) represents the coordinates of a pixel point in the large screen to be controlled, (u1, v1) represents the coordinates of the corresponding pixel point in the image to be processed, (u2, v2) represents the coordinates of the corner point of the large screen contained in the image to be processed, z represents the first distance, and f represents the second distance.
As an embodiment, the apparatus further comprises:
a first obtaining module (not shown in the figure) for acquiring an image of the large screen to be controlled through a first camera of the binocular cameras to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the first determining submodule is specifically configured to:
and calculating the distance between the first camera and the large screen to be controlled as a first distance according to the camera distance of the binocular cameras, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular cameras.
As an embodiment, the first determining submodule is specifically configured to:
calculating the first distance using the following equation:
z = f × b/d; wherein z represents the first distance, f represents the distance between the imaging plane and the lens of the first camera, b represents the camera pitch of the binocular camera, and d represents the imaging point pitch of the binocular camera.
As an embodiment, the apparatus further comprises:
a second obtaining module (not shown in the figure) for acquiring an image of the large screen to be controlled through the first camera of the binocular cameras to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the mapping relationship comprises: a first partial mapping relationship and a second partial mapping relationship; an obtaining module 902, comprising:
the third determining submodule is used for determining a mapping relation of pixel points in the image to be processed, which is mapped to a three-dimensional space coordinate system, as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera; the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, is positioned on the plane where the large screen to be controlled is positioned;
and the fourth determining submodule is used for determining the mapping relation between the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, and the pixel point in the large screen to be controlled, and the mapping relation is used as the second part mapping relation.
As an embodiment, the third determining submodule is specifically configured to:
determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values;
and determining a mapping relation of the offset coordinate value to a three-dimensional space coordinate system according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera, and taking the mapping relation as the first part mapping relation.
As an embodiment, the third determining sub-module is further configured to:
calculating the offset coordinate value of the pixel point in the image to be processed by using the following formula:
P = u - L/2, Q = -v + W/2; wherein (u, v) represents the coordinate value of a pixel point of the image to be processed in the image coordinate system, (P, Q) represents the offset coordinate value of the pixel point in the image to be processed, L represents the length of the image to be processed, and W represents the width of the image to be processed;
determining that the first partial mapping relationship is:
Z = f × b/d'; X = P × Z/f; Y = Q × Z/f; wherein (X, Y, Z) represents coordinate values in the three-dimensional space coordinate system, f represents the distance between the imaging plane and the lens of the first camera, b represents the camera pitch of the binocular camera, and d' represents the imaging point pitch of the binocular camera in the coordinate system with the first camera as the origin;
mapping pixel points in the image to be processed to the three-dimensional space coordinate system according to the first part mapping relation;
determining a first edge vector and a second edge vector of a large screen to be controlled in the three-dimensional space coordinate system;
the fourth determining submodule is further configured to:
determining that the second partial mapping relationship is:
A = |T| × cos β;  B = |T| × cos α;
wherein (A, B) represents the coordinate value of a pixel point in the large screen to be controlled in the coordinate system of the large screen to be controlled; T represents the target vector, i.e. the vector formed, in the three-dimensional space coordinate system, by a pixel point in the large screen to be controlled and a corner point of the large screen to be controlled, and |T| represents its magnitude; α represents the included angle between the target vector and the second edge vector; and β represents the included angle between the target vector and the first edge vector (the two edge vectors of the rectangular screen being orthogonal, A and B are the projections of the target vector onto the first and second edge directions respectively).
As an implementation manner, the first operation instruction is an instruction for adjusting a position of a screen, where the instruction includes a first pre-adjustment position and a first post-adjustment position, and both the first pre-adjustment position and the first post-adjustment position are positions in the image to be processed;
the first conversion module 903 may be specifically configured to:
converting the first position before adjustment into a second position before adjustment according to the mapping relation; converting the first adjusted position into a second adjusted position according to the mapping relation to obtain a second operation instruction comprising the second pre-adjusted position and the second post-adjusted position, wherein the second pre-adjusted position and the second post-adjusted position are positions in the large screen to be controlled;
the control module 904 may be specifically configured to:
determining the picture to be adjusted in the large screen to be controlled according to the second pre-adjustment position; and adjusting the picture to be adjusted to the second adjusted position.
Optionally, the control module 904 is further configured to interchange the picture displayed at the second adjusted position with the picture to be adjusted;
or, the picture to be adjusted is adjusted to the second adjusted position by zooming the picture to be adjusted.
Optionally, the control module 904 is further configured to calculate a picture scaling factor according to a position relationship between the second position before adjustment and the second position after adjustment; and zooming the picture to be adjusted according to the picture zooming coefficient to obtain a zoomed picture, wherein the zoomed picture is positioned at the second adjusted position.
Optionally, the control module 904 is further configured to calculate a pre-adjustment size of the picture to be adjusted according to a position relationship between the second pre-adjustment position and a preset position in the picture to be adjusted; calculating the adjusted size of the picture to be adjusted according to the position relation between the second adjusted position and the preset position in the picture to be adjusted; and calculating a picture scaling coefficient of the adjusted size relative to the pre-adjusted size.
In one embodiment, the first operation instruction is a new picture display instruction, which includes a new picture identifier and a first display position of a new picture;
the first conversion module 903 may be specifically configured to: converting the first display position into a second display position according to the mapping relation to obtain a second operation instruction comprising the new picture identifier and the second display position;
the control module 904 may be specifically configured to: and displaying the picture corresponding to the new picture identification at the second display position of the large screen to be controlled.
As an embodiment, the apparatus further comprises: an identification module and a second conversion module (not shown in the figure), wherein,
the identification module is used for identifying the edge of each picture in the image to be processed as a first edge;
the second conversion module is used for converting each first edge into an edge of the picture in the large screen to be controlled as a second edge according to the mapping relation;
the control module 904 may be specifically configured to:
determining a picture to be adjusted corresponding to the second operation instruction according to each second edge; and adjusting the picture to be adjusted according to the second operation instruction.
As an embodiment, the identification module may include:
the correction submodule is used for correcting a non-rectangular picture in the image to be processed into a rectangular picture to obtain a corrected image;
and the identification submodule is used for identifying the edge of each picture in the corrected image as a first edge.
As an embodiment, the identification submodule may be specifically configured to:
identifying horizontal and vertical lines in the corrected image;
counting the number of pixel points included by each identified horizontal line and vertical line;
and selecting lines of which the number of the pixel points meets the preset condition as a first edge.
When the embodiment of the invention is applied to large-screen control, a first operation instruction for an image to be processed is received; acquiring a mapping relation between pixel points in an image to be processed and pixel points in a large screen to be controlled; converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relation; controlling the large screen to be controlled based on the second operation instruction; therefore, according to the scheme, the large screen is controlled through the operation of the user in the image, and the convenience of control is improved.
An embodiment of the present invention further provides an electronic device, as shown in fig. 10, including a processor 1001 and a memory 1002;
a memory 1002 for storing a computer program;
the processor 1001 is configured to implement any of the above-described large screen control methods when executing the program stored in the memory 1002.
The memory mentioned in the above electronic device may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
As an embodiment, the electronic device further comprises:
the binocular camera is used for acquiring images of a large screen to be controlled through a first camera in the binocular camera and sending the acquired images to the processor; wherein the first camera is a left camera or a right camera.
As an embodiment, the electronic device further comprises:
and the touch screen is used for displaying the image acquired by the first camera and receiving a first operation instruction of a user for the image to be processed.
As shown in fig. 11, the electronic device may include a processor 1001, a memory 1002, a binocular camera 1003 and a touch screen 1004; the electronic device may be a handheld terminal, such as a mobile phone or a tablet computer, without limitation.
The embodiment of the present invention further provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any one of the above-mentioned large screen control methods.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, the device embodiment, and the computer-readable storage medium embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (19)

1. A large screen control method is characterized by comprising the following steps:
receiving a first operation instruction for an image to be processed, wherein the image to be processed is obtained by image acquisition for a large screen to be controlled;
acquiring a mapping relation between pixel points in the image to be processed and pixel points in the large screen to be controlled;
converting the first operation instruction into a second operation instruction aiming at the large screen to be controlled according to the mapping relation;
and controlling the large screen to be controlled based on the second operation instruction.
2. The method according to claim 1, wherein the obtaining of the mapping relationship between the pixel points in the image to be processed and the pixel points in the large screen to be controlled comprises:
determining the distance between a camera for acquiring images of the large screen to be controlled and the large screen to be controlled as a first distance;
calculating a ratio of the first distance to the second distance, the second distance being: a distance of an imaging plane from a lens of the camera;
and determining the mapping relation between the pixel points in the image to be processed and the pixel points in the large screen to be controlled according to the ratio.
3. The method of claim 2, further comprising:
acquiring an image of a large screen to be controlled by a first camera in a binocular camera to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the determining a distance between a camera for acquiring an image of the large screen to be controlled and the large screen to be controlled as a first distance includes:
and calculating the distance between the first camera and the large screen to be controlled as a first distance according to the camera distance of the binocular cameras, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular cameras.
4. The method of claim 1, further comprising:
acquiring an image of a large screen to be controlled by a first camera in a binocular camera to obtain an image to be processed; wherein the first camera is a left camera or a right camera;
the mapping relationship comprises: a first partial mapping relationship and a second partial mapping relationship; the obtaining of the mapping relationship between the pixel points in the image to be processed and the pixel points in the large screen to be controlled includes:
determining a mapping relation of pixel points in the image to be processed, which is mapped to a three-dimensional space coordinate system, as the first part mapping relation according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera; the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, is positioned on the plane where the large screen to be controlled is positioned;
and determining the mapping relation between the point of the pixel point in the image to be processed, which is mapped to the three-dimensional space coordinate system, and the pixel point in the large screen to be controlled as the second part mapping relation.
5. The method according to claim 4, wherein the determining, as the first partial mapping relationship, a mapping relationship of pixel points in the image to be processed to a three-dimensional space coordinate system according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera, and the imaging point distance of the binocular camera comprises:
determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values;
and determining a mapping relation of the offset coordinate value to a three-dimensional space coordinate system according to the camera distance of the binocular camera, the distance between an imaging plane and the lens of the first camera and the imaging point distance of the binocular camera, and taking the mapping relation as the first part mapping relation.
6. The method according to claim 5, wherein the determining coordinate values of pixel points in the image to be processed in a coordinate system with the first camera as an origin as offset coordinate values comprises:
calculating the offset coordinate value of the pixel point in the image to be processed by using the following formula:
P = u - L/2, Q = -v + W/2; wherein (u, v) represents the coordinate value of a pixel point of the image to be processed in the image coordinate system, (P, Q) represents the offset coordinate value of the pixel point in the image to be processed, L represents the length of the image to be processed, and W represents the width of the image to be processed;
the first partial mapping relationship is:
Z = f × b/d'; X = P × Z/f; Y = Q × Z/f; wherein (X, Y, Z) represents coordinate values in the three-dimensional space coordinate system, f represents the distance between the imaging plane and the lens of the first camera, b represents the camera pitch of the binocular camera, and d' represents the imaging point pitch of the binocular camera in the coordinate system with the first camera as the origin;
the method further comprises the following steps:
mapping pixel points in the image to be processed to the three-dimensional space coordinate system according to the first part mapping relation;
determining a first side vector and a second side vector of a large screen to be controlled in the three-dimensional space coordinate system, wherein the intersection point of the first side vector and the second side vector is the angular point of the large screen to be controlled;
the second partial mapping relationship is:
A = |T| × cos β;  B = |T| × cos α;
wherein (A, B) represents the coordinate value of a pixel point in the large screen to be controlled in the coordinate system of the large screen to be controlled, T represents the target vector formed by a pixel point in the large screen to be controlled and the corner point in the three-dimensional space coordinate system, |T| represents the magnitude of the target vector, α represents the included angle between the target vector and the second edge vector, and β represents the included angle between the target vector and the first edge vector.
7. The method according to claim 1, wherein the first operation instruction is an instruction for adjusting a position of a screen, and includes a first pre-adjustment position and a first post-adjustment position, and both the first pre-adjustment position and the first post-adjustment position are positions in the image to be processed;
the converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship comprises:
converting the first position before adjustment into a second position before adjustment according to the mapping relation;
converting the first adjusted position into a second adjusted position according to the mapping relation to obtain a second operation instruction comprising the second pre-adjusted position and the second post-adjusted position, wherein the second pre-adjusted position and the second post-adjusted position are positions in the large screen to be controlled;
the controlling the large screen to be controlled based on the second operation instruction comprises the following steps:
determining a picture to be adjusted in the large screen to be controlled according to the second position before adjustment;
and adjusting the picture to be adjusted to the second adjusted position.
8. The method according to claim 7, wherein the adjusting the to-be-adjusted screen to the second adjusted position comprises:
exchanging the picture displayed at the second adjusted position with the picture to be adjusted;
or, the picture to be adjusted is adjusted to the second adjusted position by zooming the picture to be adjusted.
9. The method according to claim 8, wherein the adjusting the picture to be adjusted to the second adjusted position by scaling the picture to be adjusted comprises:
calculating a picture scaling factor according to the position relation between the second position before adjustment and the second position after adjustment;
and zooming the picture to be adjusted according to the picture zooming coefficient to obtain a zoomed picture, wherein the zoomed picture is positioned at the second adjusted position.
10. The method according to claim 9, wherein calculating a picture scaling factor according to the position relationship between the second pre-adjustment position and the second post-adjustment position comprises:
calculating the size of the picture to be adjusted before adjustment according to the position relation between the second position before adjustment and the preset position in the picture to be adjusted;
calculating the adjusted size of the picture to be adjusted according to the position relation between the second adjusted position and the preset position in the picture to be adjusted;
and calculating a picture scaling coefficient of the adjusted size relative to the pre-adjusted size.
11. The method according to claim 1, wherein the first operation instruction is a new picture display instruction, which includes a new picture identifier and a first display position of a new picture; the converting the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship comprises:
converting the first display position into a second display position according to the mapping relation to obtain a second operation instruction comprising the new picture identifier and the second display position;
the controlling the large screen to be controlled based on the second operation instruction comprises the following steps:
and displaying the picture corresponding to the new picture identification at the second display position of the large screen to be controlled.
12. The method of claim 1, further comprising:
identifying the edge of each picture in the image to be processed as a first edge;
converting, according to the mapping relationship, each first edge into an edge of the corresponding picture in the large screen to be controlled, as a second edge;
the controlling the large screen to be controlled based on the second operation instruction comprises:
determining the picture to be adjusted corresponding to the second operation instruction according to each second edge;
and adjusting the picture to be adjusted according to the second operation instruction.
13. The method according to claim 12, wherein the identifying an edge of each picture in the image to be processed as a first edge comprises:
correcting a non-rectangular picture in the image to be processed into a rectangular picture to obtain a corrected image;
and identifying the edge of each picture in the corrected image as a first edge.
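A common way to realize the claim-13 correction is a perspective warp. The sketch below assumes the four corners of each picture region have already been located in the image (the claim does not specify how) and uses OpenCV purely for illustration:

```python
import cv2
import numpy as np

def rectify_picture(image, quad):
    """Warp a non-rectangular picture region back to a rectangle.

    quad: the region's four corners, assumed here to be ordered top-left,
    top-right, bottom-right, bottom-left (a convention, not from the claim).
    """
    quad = np.float32(quad)
    w = int(max(np.linalg.norm(quad[1] - quad[0]), np.linalg.norm(quad[2] - quad[3])))
    h = int(max(np.linalg.norm(quad[3] - quad[0]), np.linalg.norm(quad[2] - quad[1])))
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(quad, dst)    # 3x3 perspective matrix
    return cv2.warpPerspective(image, M, (w, h))  # corrected, rectangular picture
```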
14. The method of claim 13, wherein the identifying an edge of each picture in the corrected image as a first edge comprises:
identifying horizontal and vertical lines in the corrected image;
counting the number of pixel points comprised in each identified horizontal line and vertical line;
and selecting, as first edges, the lines whose number of pixel points meets a preset condition.
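One hedged reading of claim 14 in code: binarize the corrected image, keep only long horizontal and vertical pixel runs with morphological opening, count the pixel points on each candidate line, and retain the lines that are long enough. The 60%-of-span threshold below merely stands in for the unspecified preset condition:

```python
import cv2
import numpy as np

def find_first_edges(corrected_gray, min_ratio=0.6):
    """Return candidate first-edge rows and columns of a grayscale image."""
    h, w = corrected_gray.shape
    # Binarize so that picture borders become foreground pixels.
    binary = cv2.adaptiveThreshold(corrected_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 15, -2)
    # Opening with long, thin kernels keeps only straight horizontal/vertical runs.
    horiz = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                             cv2.getStructuringElement(cv2.MORPH_RECT, (max(w // 20, 1), 1)))
    vert = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (1, max(h // 20, 1))))
    # Count pixel points per line and keep the lines meeting the preset condition.
    rows = [y for y in range(h) if np.count_nonzero(horiz[y]) >= min_ratio * w]
    cols = [x for x in range(w) if np.count_nonzero(vert[:, x]) >= min_ratio * h]
    return rows, cols
```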
15. A large screen control device, comprising:
a receiving module, configured to receive a first operation instruction for an image to be processed, wherein the image to be processed is obtained by performing image acquisition on a large screen to be controlled;
an acquisition module, configured to acquire a mapping relationship between pixel points in the image to be processed and pixel points in the large screen to be controlled;
a first conversion module, configured to convert the first operation instruction into a second operation instruction for the large screen to be controlled according to the mapping relationship;
and a control module, configured to control the large screen to be controlled based on the second operation instruction.
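Read as a software architecture, the claim-15 modules map onto a small controller class. A speculative skeleton, reusing the homography assumption from the earlier sketches (the names and data shapes are illustrative, not the patent's design):

```python
import numpy as np

class LargeScreenController:
    """Speculative skeleton of the claim-15 device."""

    def __init__(self, H):
        self.H = H  # assumed form of the mapping relationship: a 3x3 homography

    def acquire_mapping(self):                 # acquisition module
        return self.H

    def convert(self, first_instruction):      # first conversion module
        # Replace every image-space position with its screen-space counterpart.
        return {key: self._map(value) if key.endswith("position") else value
                for key, value in first_instruction.items()}

    def control(self, second_instruction):     # control module (stubbed output)
        print("sending to large screen:", second_instruction)

    def receive(self, first_instruction):      # receiving module drives the pipeline
        self.control(self.convert(first_instruction))

    def _map(self, point):
        x, y = point
        v = self.H @ np.array([x, y, 1.0])
        return v[0] / v[2], v[1] / v[2]
```

With the identity matrix as H, receive({"type": "move", "pre_position": (10, 20), "post_position": (30, 40)}) emits the instruction with both positions passed through the mapping unchanged.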
16. An electronic device, comprising a processor and a memory, wherein:
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1 to 14 when executing the program stored in the memory.
17. The electronic device of claim 16, further comprising:
a binocular camera, configured to acquire an image of the large screen to be controlled through a first camera in the binocular camera and send the acquired image to the processor, wherein the first camera is the left camera or the right camera of the binocular camera.
18. The electronic device of claim 17, further comprising:
a touch screen, configured to display the image acquired by the first camera and receive the first operation instruction of a user for the image to be processed.
19. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 14.
CN201910147561.7A · Priority date: 2019-02-27 · Filing date: 2019-02-27 · Title: Large screen control method, device and equipment · Status: Active · Granted as CN111625210B

Priority Applications (1)

Application Number: CN201910147561.7A · Priority Date: 2019-02-27 · Filing Date: 2019-02-27 · Title: Large screen control method, device and equipment


Publications (2)

CN111625210A (application), published 2020-09-04
CN111625210B (grant), published 2023-08-04

Family

ID=72271624

Family Applications (1)

Application Number: CN201910147561.7A (Active; granted as CN111625210B) · Title: Large screen control method, device and equipment · Priority Date: 2019-02-27 · Filing Date: 2019-02-27

Country Status (1)

Country: CN · Publication: CN111625210B

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101769723A (en) * 2008-12-30 2010-07-07 鸿富锦精密工业(深圳)有限公司 Electronic device and object shape parameter measurement method thereof
US20110188760A1 (en) * 2010-02-03 2011-08-04 Oculus Info Inc. System and Method for Creating and Displaying Map Projections related to Real-Time Images
US20120216149A1 (en) * 2011-02-18 2012-08-23 Samsung Electronics Co., Ltd. Method and mobile apparatus for displaying an augmented reality
CN102508565A (en) * 2011-11-17 2012-06-20 Tcl集团股份有限公司 Remote control cursor positioning method and device, remote control and cursor positioning system
US20130194217A1 (en) * 2012-02-01 2013-08-01 Jaejoon Lee Electronic device and method of controlling the same
CN103365572A (en) * 2012-03-26 2013-10-23 联想(北京)有限公司 Electronic equipment remote control method and electronic equipment
CN104899361A (en) * 2015-05-19 2015-09-09 华为技术有限公司 Remote control method and apparatus
US20160342224A1 (en) * 2015-05-19 2016-11-24 Huawei Technologies Co., Ltd. Remote Control Method and Apparatus
WO2018076609A1 (en) * 2016-10-27 2018-05-03 中兴通讯股份有限公司 Terminal and method for operating terminal
CN106961590A (en) * 2017-03-23 2017-07-18 联想(北京)有限公司 Control method and electronic equipment
CN107424126A (en) * 2017-05-26 2017-12-01 广州视源电子科技股份有限公司 Method for correcting image, device, equipment, system and picture pick-up device and display device
CN107506162A (en) * 2017-08-29 2017-12-22 歌尔科技有限公司 Coordinate mapping method, computer-readable recording medium and projecting apparatus

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116068945A (en) * 2023-03-07 2023-05-05 鼎擎科技有限公司 Building intelligent equipment control method and system

Also Published As

CN111625210B, published 2023-08-04

Similar Documents

Publication Publication Date Title
US11758265B2 (en) Image processing method and mobile terminal
US7679643B2 (en) Remote instruction system, remote instruction method, and program product for remote instruction
JP4196216B2 (en) Image composition system, image composition method and program
JP5109803B2 (en) Image processing apparatus, image processing method, and image processing program
US20150213584A1 (en) Projection system, image processing apparatus, and correction method
WO2015081870A1 (en) Image processing method, device and terminal
JP2014225843A (en) Image processing apparatus, image processing method, and program
CN105812653A (en) Image pickup apparatus and image pickup method
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
JP2016510522A (en) Imaging apparatus and imaging method
CN104917972A (en) Method for remotely controlling picture taking, device and system
KR20060041116A (en) Apparatus and method for correcting distorted image and image display system using it
CN107368104B (en) Random point positioning method based on mobile phone APP and household intelligent pan-tilt camera
CN111625210B (en) Large screen control method, device and equipment
KR101082545B1 (en) Mobile communication terminal had a function of transformation for a picture
JP5996233B2 (en) Imaging device
CN114339179B (en) Projection correction method, apparatus, storage medium and projection device
JP5509986B2 (en) Image processing apparatus, image processing system, and image processing program
JP2006191408A (en) Image display program
CN115657893A (en) Display control method, display control device and intelligent equipment
US20150054854A1 (en) Image Cropping Manipulation Method and Portable Electronic Device
CN112532875B (en) Terminal device, image processing method and device thereof, and storage medium
JP2011010157A (en) Video display system and video display method
US11250640B2 (en) Measurement method, measurement device, and recording medium
CN110771147A (en) Method for adjusting parameters of shooting device, control equipment and shooting system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant