WO2021253189A1 - Electric device, scanning method of controlling electric device, and computer readable storage medium - Google Patents

Electric device, scanning method of controlling electric device, and computer readable storage medium

Info

Publication number
WO2021253189A1
WO2021253189A1 (PCT/CN2020/096213)
Authority
WO
WIPO (PCT)
Prior art keywords
box
image
faces
signal processor
image signal
Prior art date
Application number
PCT/CN2020/096213
Other languages
English (en)
Inventor
Chiaki Aoyama
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. filed Critical Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2020/096213 priority Critical patent/WO2021253189A1/fr
Publication of WO2021253189A1 publication Critical patent/WO2021253189A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present invention relates to an electric device, a scanning method of controlling the electric device, and a computer readable storage medium.
  • 3D scanners are used to obtain 3D models, but they are expensive, complicated to operate, and require long processing times.
  • the mutual relationship of the faces can be displayed as a correct development view or three-dimensional view by using a camera module of an electric device such as a smartphone: the user simply photographs the required faces of the box, and an image as if photographed from the front is acquired.
  • the present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide an electric device and a scanning method of controlling electric device.
  • an electric device may include:
  • a camera module that takes a photograph of a subject to acquire a master camera image
  • a range sensor module that acquires range depth information of the subject by using a light
  • an image signal processor that controls the camera module and the range sensor module to acquire a camera image, based on the master camera image and the range depth information, wherein
  • the image signal processor acquires the master camera image including a plurality of faces of a box, and range depth information of the plurality of faces of the box, by controlling the camera module and the range sensor module,
  • the image signal processor selectively extracts images of the plurality of faces of the box from the master camera image by estimating the contours of the plurality of faces of the box, respectively, based on the master camera image and the range depth information,
  • the image signal processor performs a projective transformation of the extracted images of the plurality of faces of the box into front face images, each of which is viewed from the front, and
  • the image signal processor acquires image data synthesized so as to be an image of a development view of the box or a three-dimensional image of the box, by synthesizing the transformed images of the plurality of faces of the box transformed by the projective transformation.
  • the box is arranged on a reference plane.
  • the range sensor module emits a pulsed light to the subject and detects a reflected light of the pulsed light reflected from the subject, thereby acquiring time of flight (ToF) depth information as the range depth information.
  • the camera module and the range sensor module acquiring the master camera image and the range depth information of the box with a rectangular parallelepiped shape or a cubic shape.
  • the reference plane is a horizontal plane.
  • the image signal processor causes the display module to display an image of a development view of the box or a three-dimensional image of the box based on the synthesized image data.
  • the image signal processor obtains information on dimensions of the box based on the synthesized image data.
  • the image signal processor obtains the master camera image including the plurality of faces of the box by imaging the plurality of faces of the box with the camera module, in a state where the box is arranged on the reference plane, so that the master camera image includes a vertex at which corners of a plurality of faces of the box intersect, and
  • the image signal processor irradiates the plurality of faces of the box with pulsed light from the range sensor module, and acquires the ToF depth information of the plurality of faces of the box with the range sensor module.
  • the display module that displays predefined information
  • a main processor that controls the display module and the input module.
  • the image signal processor displays an image of the box being photographed by the camera module on the display module
  • the image signal processor sets a position designated in the image of the box as a vertex position of the box, in response to an operation input relating to an instruction of the vertex position of the box by a user to the input module.
  • the image signal processor estimates a plurality of plane parameters as point cloud data of the plurality of faces of the box in the master camera image, based on the range depth information, assuming that the plurality of faces of the box are perpendicular or parallel to the reference plane,
  • the image signal processor estimates intersection lines of two adjacent faces of the box corresponding to the plane parameter
  • the image signal processor sets an intersection point of the estimated plurality of intersection lines as a vertex of the box
  • the image signal processor estimates a contour line of the box with respect to a background in the master camera image, based on the master camera image and the range depth information, and
  • the image signal processor estimates contour of the plurality of faces of the box, based on the estimated intersection line and the contour line.
  • the image signal processor selects the intersection line passing through the set vertex position, as the contour line.
  • the image signal processor displays the image of the developed view of the box on the display module, the image of the developed view of the box obtained by synthesizing converted images of the plurality of faces of the box converted by the projective transformation, and
  • the image signal processor changes an arrangement of a plane of a development view of the box displayed on the display module, in response to an operation input relating to an instruction to arrange the face of the box by a user to the input module.
  • the image signal processor corrects the brightness of two adjacent faces of the box, in the converted image obtained by synthesizing the plurality of faces of the box, so that the brightness at the ends of the two adjacent faces of the box is close.
  • a detection resolution of the range sensor module is lower than a detection resolution of the camera module.
  • a scanning method for controlling an electric device including: a camera module that takes a photograph of a subject to acquire a master camera image; a range sensor module that acquires range depth information of the subject by using a light; and an image signal processor that controls the camera module and the range sensor module to acquire a camera image, based on the master camera image and the range depth information,
  • the scanning method comprising:
  • acquiring, by means of the image signal processor, a master camera image including a plurality of faces of a box, and range depth information of the plurality of faces of the box, by controlling the camera module and the range sensor module,
  • extracting, by means of the image signal processor, images of the plurality of faces of the box from the master camera image by estimating the contours of the plurality of faces of the box, respectively, based on the master camera image and the range depth information,
  • performing, by means of the image signal processor, a projective transformation of the extracted images of the plurality of faces of the box into front face images, each of which is viewed from the front, and
  • acquiring, by means of the image signal processor, image data synthesized so as to be an image of a development view of the box or a three-dimensional image of the box, by synthesizing the transformed images of the plurality of faces of the box transformed by the projective transformation.
  • the box is arranged on a reference plane.
  • the range sensor module emits a pulsed light to the subject and detects a reflected light of the pulsed light reflected from the subject, thereby acquiring time of flight (ToF) depth information as the range depth information.
  • a computer readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the computer program implements a scanning method for controlling an electric device, and the scanning method comprises:
  • acquiring, by means of the image signal processor, a master camera image including a plurality of faces of a box, and range depth information of the plurality of faces of the box, by controlling a camera module and a range sensor module,
  • extracting, by means of the image signal processor, images of the plurality of faces of the box from the master camera image by estimating the contours of the plurality of faces of the box, respectively, based on the master camera image and the range depth information,
  • performing, by means of the image signal processor, a projective transformation of the extracted images of the plurality of faces of the box into front face images, each of which is viewed from the front, and
  • acquiring, by means of the image signal processor, image data synthesized so as to be an image of a development view of the box or a three-dimensional image of the box, by synthesizing the transformed images of the plurality of faces of the box transformed by the projective transformation.
  • the box is arranged on a reference plane.
  • the range depth information is time of flight (ToF) depth information.
  • FIG. 1 is a diagram illustrating an example of an arrangement of an electric device 100 and a subject 101 according to an embodiment of the present invention
  • FIG. 2 is a diagram illustrating an example of the configuration of the electric device 100 shown in FIG. 1;
  • FIG. 3 is a diagram illustrating an example of an overall flow for the electric device 100 shown in FIGS. 1 and 2 to acquire an image of the face of the box;
  • FIG. 4 is a diagram illustrating a specific example of the step S1 shown in FIG. 3 for acquiring the master camera image and ToF depth information;
  • FIG. 5 is a diagram illustrating an example of display on the display module 45 of the electric device 100 when the vertex position of the box 101 is designated;
  • FIG. 6 is a diagram showing a specific example of the top view processing (the step S2) shown in FIG. 3;
  • FIG. 7A is a diagram illustrating an example of the point cloud data 101T of the image of the plurality of faces of the box that is captured;
  • FIG. 7B is a diagram showing plane parameters, which are point cloud data, of an image of a face 101Z of the box parallel to the reference plane (the horizontal plane) shown in FIG. 7A;
  • FIG. 7C is a diagram showing plane parameters, which are point cloud data, of an image of a face 101X of the box perpendicular to the reference face shown in FIG. 7A;
  • FIG. 7D is a diagram showing plane parameters, which are point cloud data, of an image of a face 101Y of the box perpendicular to the reference face shown in FIG. 7A;
  • FIG. 8A is a diagram illustrating an example of an intersection line 101A between two adjacent faces 101X and 101Y corresponding to plane parameters;
  • FIG. 8B is a diagram illustrating an example of a boundary 101G between two adjacent faces 101X and 101Y set by the intersection line illustrated in FIG. 8A;
  • FIG. 8C is a diagram illustrating an example of a state in which boundaries between three adjacent faces 101X, 101Y, and 101Z are set;
  • FIG. 9 is a diagram illustrating an example of a specific flow of the contour estimation processing (the step S23) illustrated in FIG. 6;
  • FIG. 10A is a diagram illustrating an example of a face 101Y having an indefinite boundary 101F in the contour line estimation processing
  • FIG. 10B is a diagram illustrating an example of extracting a line segment that is a candidate for a contour line of the face 101Y based on a master camera image that is a color image, in the contour line estimation processing;
  • FIG. 10C is a diagram illustrating an example of contour line candidates extracted from the face 101Y, in the contour line estimation processing
  • FIG. 11A is a diagram illustrating an example of an image including the face 101Y in the master camera image, in the contour line estimation processing
  • FIG. 11B is a diagram illustrating an example of an image obtained by performing edge detection and binarization on the image of the face 101Y in the master camera image illustrated in FIG. 11A, in the contour line estimation processing;
  • FIG. 11C is a diagram illustrating an example of an image obtained by performing labeling of a binarized image on the image illustrated in FIG. 11B, in the contour line estimation processing;
  • FIG. 11D is a diagram illustrating an example of an image obtained by deleting an image having a pixel number equal to or less than a predetermined value from the image illustrated in FIG. 11C, in the contour estimation processing;
  • FIG. 12A is a diagram illustrating an example of an image obtained by performing the Hough transform on the image illustrated in FIG. 11D, in the contour line estimation processing;
  • FIG. 12B is a diagram illustrating an example of an image in which a line segment is detected from straight line candidates by the Hough transform with respect to the image illustrated in FIG. 12A;
  • FIG. 12C is a diagram illustrating an example of an image in which a predetermined line segment is selected from the detected line segments in the image illustrated in FIG. 12B;
  • FIG. 13A is a diagram illustrating an example in which candidates for the contour line 102 of each face are combined, in the contour estimation processing;
  • FIG. 13B is a diagram showing an example of an image of a box sectioned by the combined contour line shown in FIG. 13A;
  • FIG. 13C is a diagram illustrating an example in which candidates for the contour line 102 of each face are combined
  • FIG. 14 is a diagram showing an example of a specific flow of the image correction processing (the step S3) shown in FIG. 3;
  • FIG. 15A is a diagram illustrating an example of an image in which the outline of the box of the master camera image is set
  • FIG. 15B is a diagram illustrating an example of an image obtained by projective transforming the image of one face C of the box of the master camera image so as to be viewed from the front;
  • FIG. 15C is a diagram illustrating an example of an image obtained by projective transforming an image of one face B of the box of the master camera image so as to be viewed from the front;
  • FIG. 15D is a diagram illustrating an example of an image obtained by projective transforming the image of one face A of the box of the master camera image so as to be viewed from the front;
  • FIG. 16A is a diagram illustrating an example of images of a plurality of faces (faces A, B, and C) of the box of the master camera image captured from a certain direction;
  • FIG. 16B is a diagram illustrating a combination of sides of a plurality of faces (faces A, B, and C) of the box shown in FIG. 16A after projective transformation;
  • FIG. 16C is a diagram illustrating an example of images of a plurality of faces (faces A, D, and E) of the box of the master camera image captured from a direction different from FIG. 16A;
  • FIG. 16D is a diagram showing a combination of sides of a plurality of faces (faces A, D, and E) of the box shown in FIG. 16C after projective transformation;
  • FIG. 17A is a diagram illustrating an example of a processing of correcting the brightness of the image of each of the adjacent faces A, B, and C subjected to the projective transformation illustrated in FIG. 16B;
  • FIG. 17B is a diagram illustrating an example of an image after correcting the brightness of the image of each of the faces A, B, and C illustrated in FIG. 17A;
  • FIG. 18 is a diagram showing an example of a specific flow of the face combining processing (the step S5) shown in FIG. 3;
  • FIG. 19 is a diagram illustrating an example of a processing of selecting a corresponding face A from a plurality of plane images in which the combinations illustrated in FIGS. 16B and 16D are set;
  • FIG. 20A is a diagram illustrating an example of a processing of connecting the images of the plurality of faces illustrated in FIG. 19 based on the selected face A and updating the combination information;
  • FIG. 20B is a diagram illustrating an example of an arrangement of each face based on the combination information illustrated in FIG. 20A;
  • FIG. 21A is a diagram illustrating an example of a user's operation input to the display module 45 of the electric device 100, which is displayed in a development view of the box having the arrangement of each face in FIG. 20B;
  • FIG. 21B is a diagram showing an example of the display of the display module 45 that displays candidates for changing the arrangement of the faces of the developed view in response to the user's operation input in FIG. 21A;
  • FIG. 22A is a diagram illustrating an example of a user's operation input to the display module 45 of the electric device 100 displaying candidates for changing the arrangement of the face of the developed view in FIG. 21B;
  • FIG. 22B is a diagram showing an example of the display of the display module 45 that changes and displays the layout of the face of the developed view in response to the user's operation input in FIG. 22A;
  • FIG. 23A is a diagram illustrating an example of a development view of a box that is image data output by the electric device 100.
  • FIG. 23B is a diagram illustrating an example of a three-dimensional image of a box, which is image data output by the electric device 100.
  • FIG. 1 is a diagram illustrating an example of an arrangement of an electric device 100 and a subject 101 according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example of the configuration of the electric device 100 shown in FIG. 1.
  • the electric device 100 includes a camera module 10, a range sensor module 20, and an image signal processor 30 that controls the camera module 10 and the range sensor module 20, and processes camera image data acquired from the camera module 10.
  • reference numeral 101 in FIG. 1 depicts a subject which is a box.
  • the camera module 10 includes, for example, a master lens 10a that is capable of focusing on a subject, a master image sensor 10b that detects an image inputted via the master lens 10a, and a master image sensor driver 10c that drives the master image sensor 10b, as shown in FIG. 2.
  • the camera module 10 includes, for example, a Gyro sensor 10d that detects the angular velocity and the acceleration of the camera module 10, a focus & OIS actuator 10f that actuates the master lens 10a, and a focus & OIS driver 10e that drives the focus & OIS actuator 10f, as shown in FIG. 2.
  • the camera module 10 acquires a master camera image of the subject 101, for example (FIG. 1).
  • the box 101 that is the subject 101 has a rectangular parallelepiped shape or a cubic shape.
  • the range sensor module 20 includes, for example, a ToF lens 20a, a range sensor 20b that detects the reflection light inputted via the ToF lens 20a, a range sensor driver 20c that drives the range sensor 20b, and a projector 20d that outputs the pulse lights.
  • the range sensor module 20 acquires range depth information of the subject 101 by using a light. Specifically, the range sensor module 20 acquires time of flight (ToF) depth information (a ToF depth value) as the range depth information by emitting pulsed light toward the subject 101 and detecting the reflected light from the subject 101, for example.
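  • As a rough illustration of the ToF principle above: the sensor measures the round-trip time of the pulse, and the one-way distance is half the optical path. A minimal sketch in Python (the function name and the example timing are illustrative, not from this disclosure):

```python
# Minimal time-of-flight (ToF) range sketch: the range sensor measures
# the round-trip time of the emitted pulse, so the one-way distance is
# half of the optical path travelled at the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """One-way distance to the subject from the pulse round-trip time."""
    return C * round_trip_time_s / 2.0

# Example: a 10 ns round trip corresponds to roughly 1.5 m.
print(f"{tof_distance_m(10e-9):.3f} m")  # ~1.499 m
```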
  • the resolution of the detection by the range sensor module 20 is lower than the resolution of the detection by the camera module 10.
  • the image signal processor 30 controls, for example, the camera module 10 and the range sensor module 20 to acquire a camera image, which is the master camera image, based on the master camera image obtained by means of the camera module 10 and the ToF depth information (the range depth information) obtained by means of the range sensor module 20.
  • the electric device 100 includes a global navigation satellite system (GNSS) module 40, a wireless communication module 41, a CODEC 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (IMU) 47, a main processor 48, and a memory 49.
  • the GNSS module 40 measures the current position of the electric device 100, for example.
  • the wireless communication module 41 performs wireless communications with the Internet, for example.
  • the CODEC 42 bidirectionally performs encoding and decoding, using a predetermined encoding/decoding method, as shown in FIG. 2 for example.
  • the speaker 43 outputs a sound in accordance with sound data decoded by the CODEC 42, for example.
  • the microphone 44 outputs sound data to the CODEC 42 based on inputted sound, for example.
  • the display module 45 displays predefined information.
  • the display module 45 is, for example, a touch panel.
  • the input module 46 receives a user’s input (a user’s operations) .
  • the input module 46 is included in, for example, the touch panel.
  • An IMU 47 detects, for example, the angular velocity and the acceleration of the electric device 100.
  • the main processor 48 controls the global navigation satellite system (GNSS) module 40, the wireless communication module 41, the CODEC 42, the speaker 43, the microphone 44, the display module 45, the input module 46, and the IMU 47.
  • the memory 49 stores a program and data required for the image signal processor 30 to control the camera module 10 and the range sensor module 20, acquired image data, and a program and data required for the main processor 48 to control the electric device 100.
  • the memory 49 includes a computer readable storage medium having a computer program stored thereon, wherein when the computer program is executed by the main processor 48, the computer program implements a scanning method for controlling the electric device 100.
  • the scanning method comprises: acquiring, by means of the image signal processor, a master camera image including a plurality of faces of a box arranged on a reference plane, and range depth information of the plurality of faces of the box, by controlling a camera module and a range sensor module; extracting, by means of the image signal processor, images of the plurality of faces of the box from the master camera image by estimating the contours of the plurality of faces of the box, respectively, based on the master camera image and the range depth information; performing, by means of the image signal processor, a projective transformation of the extracted images of the plurality of faces of the box into front face images, each of which is viewed from the front; and acquiring, by means of the image signal processor, image data synthesized so as to be an image of a development view of the box or a three-dimensional image of the box, by synthesizing the transformed images of the plurality of faces of the box transformed by the projective transformation.
  • the electric device 100 having the above-described configuration is a mobile phone such as a smartphone in this embodiment, but may be other types of electric devices (for instance, a tablet computer and a PDA) including camera modules.
  • FIG. 3 is a diagram illustrating an example of an overall flow for the electric device 100 shown in FIGS. 1 and 2 to acquire an image of the face of the box.
  • the image signal processor 30 controls the camera module 10 and the range sensor module 20 to take a master camera image including a plurality of faces of the box 101 arranged on the reference plane 200, and ToF depth information of the plurality of faces of the box 101 arranged on the reference plane 200 by the operation of the user (the step S1) .
  • the image signal processor 30 captures the plurality of faces of the box 101 with the camera module 10 to obtain a master camera image including the plurality of faces of the box 101, in a state where the box 101 is arranged on the reference plane 200, so as to include a vertex where the corners of a plurality of faces of the box 101 intersect.
  • the image signal processor 30 acquires ToF depth information of the plurality of faces of the box 101 with the range sensor module 20, by irradiating the plurality of faces of the box with pulsed light from the range sensor module 20.
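  • For concreteness, a ToF depth map of this kind can be unprojected into 3D points with a pinhole camera model. The sketch below shows one plausible way the point cloud data used later could be produced; the intrinsics fx, fy, cx, cy are hypothetical calibration values, not specified by this disclosure:

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Unproject an H x W ToF depth map (metres) into an N x 3 point
    cloud via the pinhole model; pixels with zero depth are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```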
  • the reference plane 200 on which the box 101 is arranged is, for example, a horizontal plane.
  • the top view processing is a process of estimating the contours of a plurality of faces of the box 101 and selectively extracting images of the plurality of faces of the box 101 from the master camera image, based on the master camera image and the range depth information.
  • This image correction processing is a processing of projectively transforming the extracted images of the plurality of faces of the box 101 into front face images as viewed from the front (from the direction perpendicular to each face).
  • the image signal processor 30 determines whether or not additional photographing is required, based on the operation of the user (the step S4).
  • If additional photographing is required, the image signal processor 30 returns to the step S1 and additionally acquires a master camera image including a plurality of other faces of the box 101 arranged on the reference plane 200, and range depth information of the other faces of the box 101.
  • If the image signal processor 30 determines in the step S4 that no additional photographing is necessary, it acquires image data synthesized so as to be an image of a development view of the box 101 or a three-dimensional image of the box 101, by synthesizing the converted images of the plurality of faces of the box 101 converted by the projective transformation (the step S5).
  • the image signal processor 30 may acquire information on the dimensions of the box 101 based on the synthesized image data.
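  • Once the box vertices are known in 3D, the dimensions reduce to distances between one vertex and its three neighbours along the edges. A minimal sketch under that assumption (the vertex arguments are illustrative):

```python
import numpy as np

def box_dimensions(v0, vx, vy, vz):
    """Edge lengths (width, depth, height) of the box, given one vertex
    v0 and the three adjacent vertices reached along its three edges."""
    v0 = np.asarray(v0, dtype=float)
    return tuple(np.linalg.norm(np.asarray(v) - v0) for v in (vx, vy, vz))

# Example: a 0.3 m x 0.2 m x 0.1 m box.
print(box_dimensions([0, 0, 0], [0.3, 0, 0], [0, 0.2, 0], [0, 0, 0.1]))
```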
  • the image signal processor 30 outputs data by displaying the image of the developed view of the box 101 or the three-dimensional image of the box 101 on the display module 45 (the step S6) , based on the image data synthesized in step S5.
  • the box 101 is composed of a plurality of planes (faces) , and two adjacent planes (faces) are orthogonal. That is, the box 101 has a rectangular parallelepiped shape or a cubic shape.
  • The user changes the relative position of the electric device 100 with respect to the box 101, thereby photographing a necessary face of the box 101. For example, the user rotates the box 101 to capture the required face.
  • FIG. 4 is a diagram illustrating a specific example of step S1 shown in FIG. 3 for acquiring the master camera image and ToF depth information.
  • FIG. 5 is a diagram illustrating an example of display on the display module 45 of the electric device 100 when the vertex position of the box 101 is designated.
  • the image signal processor 30 displays an image of the box 101 being photographed by the camera module 10 on the display module 45, for example, as shown in FIG. 5.
  • the image signal processor 30 sets the position designated by the user in the image of the box 101 as the vertex position 45a of the box, in response to an operation input relating to an instruction of the vertex position 45a of the box 101 by the user to the input module 46 (the touch panel) .
  • the vertex position is first displayed at the center of the screen, at the previous operation position, or at the result of the face estimation. If the user wants to specify another position, the user touches the desired position on the touch panel.
  • the vertex position may be displayed at the touched position by the user touching the touch panel.
  • the shutter operation may be performed by the user touching the touch panel.
  • FIG. 6 is a diagram showing a specific example of the top view processing (the step S2) shown in FIG. 3.
  • FIG. 7A is a diagram illustrating an example of the point cloud data 101T of the image of the plurality of faces of the box that is captured.
  • FIG. 7B is a diagram showing plane parameters, which are point cloud data, of an image of a face 101Z of the box parallel to the reference plane (the horizontal plane) shown in FIG. 7A.
  • FIG. 7C is a diagram showing plane parameters, which are point cloud data, of an image of a face 101X of the box perpendicular to the reference face shown in FIG. 7A.
  • FIG. 7D is a diagram showing plane parameters, which are point cloud data, of an image of a face 101Y of the box perpendicular to the reference face shown in FIG. 7A.
  • FIG. 8A is a diagram illustrating an example of an intersection line 101A between two adjacent faces 101X and 101Y corresponding to plane parameters.
  • FIG. 8B is a diagram illustrating an example of a boundary 101G between two adjacent faces 101X and 101Y set by the intersection line illustrated in FIG. 8A.
  • FIG. 8C is a diagram illustrating an example of a state in which boundaries between three adjacent faces 101X, 101Y, and 101Z are set.
  • the point cloud data of the reference plane is omitted.
  • the image signal processor 30 estimates a plurality of plane parameters as point cloud data of a plurality of faces of the box 101 in the master camera image based on the range depth information (FIG. 7A), assuming that the plurality of faces of the box 101 are perpendicular or parallel (horizontal) to the reference plane 200 (the step S21 in FIG. 6).
  • the image signal processor 30 estimates a plurality of plane parameters using, for example, RANSAC (Random Sample Consensus) .
  • the image signal processor 30 actually obtains point cloud data for the reference plane 200 as well. However, the image signal processor 30 deletes, as the point cloud data of the reference plane 200, the point cloud data of the face whose normal vector points upward and whose position is the lowest.
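  • A minimal sketch of RANSAC plane fitting of the kind named in the step S21; the iteration count and inlier tolerance are assumed values, not taken from this disclosure:

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 500, tol: float = 0.01,
                 seed: int = 0):
    """Fit one plane (unit normal n, offset d with n.p + d = 0) to an
    N x 3 point cloud by RANSAC; returns the plane and its inlier mask.
    Faces of the box can be peeled off one by one by re-running this on
    the remaining outliers; the face whose normal points up and whose
    height is lowest would then be discarded as the reference plane."""
    rng = np.random.default_rng(seed)
    best_mask, best_plane = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # degenerate (nearly collinear) sample
            continue
        n = n / norm
        d = -n.dot(p0)
        mask = np.abs(points @ n + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (n, d)
    return best_plane, best_mask
```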
  • the image signal processor 30 estimates an intersection line 101A between two adjacent faces of the box corresponding to the plane parameter (the step S22 in FIG. 6) .
  • the image signal processor 30 sets the intersection point of the estimated plurality of intersection lines 101A as the vertex position of the box 101.
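  • The geometry behind the step S22 and the vertex setting can be sketched as follows, using the plane representation n·p + d = 0 from the RANSAC sketch above (a sketch, not the literal implementation of this disclosure):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line where planes n1.p + d1 = 0 and n2.p + d2 = 0 meet,
    returned as (point_on_line, unit_direction)."""
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # One point on the line: satisfy both plane equations and pin the
    # free coordinate along the line direction to zero.
    A = np.stack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    return np.linalg.solve(A, b), direction

def box_vertex(planes):
    """Vertex where three faces meet: solve the three plane equations."""
    N = np.stack([n for n, _ in planes])
    b = np.array([-d for _, d in planes])
    return np.linalg.solve(N, b)
```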
  • the image signal processor 30 selects the intersection line 101G passing through the set vertex position as the contour line, when the vertex position of the box 101 is set (for example, in FIG. 5) .
  • the image signal processor 30 excludes combinations in which the intersection line of the faces does not pass through the designated position.
  • the image signal processor 30 estimates the contour line of the box with respect to the background in the master camera image, based on the master camera image and the range depth information (the step S23 in FIG. 6) .
  • the image signal processor 30 estimates the outline of the face of the box based on the estimated intersection lines and the contour line (the step S24 in FIG. 6).
  • FIG. 9 is a diagram illustrating an example of a specific flow of the contour estimation processing (the step S23) illustrated in FIG. 6.
  • FIG. 10A is a diagram illustrating an example of a face 101Y having an indefinite boundary 101F in the contour line estimation processing.
  • FIG. 10B is a diagram illustrating an example of extracting a line segment that is a candidate for a contour line of the face 101Y based on a master camera image that is a color image, in the contour line estimation process.
  • FIG. 10C is a diagram illustrating an example of contour line candidates extracted from the face 101Y, in the contour line estimation processing.
  • FIG. 11A is a diagram illustrating an example of an image including the face 101Y in the master camera image, in the contour line estimation processing.
  • FIG. 11B is a diagram illustrating an example of an image obtained by performing edge detection and binarization on the image of the face 101Y in the master camera image illustrated in FIG. 11A, in the contour line estimation processing.
  • FIG. 11C is a diagram illustrating an example of an image obtained by performing labeling of a binarized image on the image illustrated in FIG. 11B, in the contour line estimation processing.
  • FIG. 11D is a diagram illustrating an example of an image obtained by deleting an image having a pixel number equal to or less than a predetermined value from the image illustrated in FIG. 11C, in the contour estimation processing.
  • FIG. 12A is a diagram illustrating an example of an image obtained by performing the Hough transform on the image illustrated in FIG. 11D, in the contour line estimation processing.
  • FIG. 12B is a diagram illustrating an example of an image in which a line segment is detected from straight line candidates by the Hough transform with respect to the image illustrated in FIG. 12A.
  • FIG. 12C is a diagram illustrating an example of an image in which a predetermined line segment is selected from the detected line segments in the image illustrated in FIG. 12B.
  • the image signal processor 30 sets an area 101F by extending the face 101Y having the indefinite boundary (FIG. 10A) by a margin beyond the range in which the point cloud data exists.
  • the image signal processor 30 extracts line segments that are contour line candidates within this region of the RGB image (the steps S231 to S236 in FIG. 9). Note that, depending on the image, a plurality of contour line candidates may exist.
  • the image signal processor 30 executes edge detection and binarization processing on the image including the face 101Y of the box 101 shown in FIG. 11A (the step S231 in FIG. 9) .
  • the image signal processor 30 labels the binarized image as shown in FIG. 11C (the step S232 in FIG. 9) .
  • the image signal processor 30 deletes an area having a small number of pixels in the labeled area (the step S233 in FIG. 9) .
  • the image signal processor 30 acquires straight line candidates 102F by Hough transform (the step S234 in FIG. 9) .
  • the image signal processor 30 detects line segments (the step S235 in FIG. 9) .
  • the image signal processor 30 tracks pixel values along the straight line candidate 102F, and sets a range in which the labeled pixel value is present as a line segment position.
  • the image signal processor 30 selects a line segment by deleting a short line segment among the line segments that intersect and have close angles (the step S236 in FIG. 9) .
  • the selected line segment 102 becomes a contour line.
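  • The steps S231 to S235 map naturally onto standard OpenCV building blocks. The sketch below is one possible reading of that flow (the final selection of the step S236 is omitted); the Canny and Hough thresholds and the minimum region size are assumed values:

```python
import cv2
import numpy as np

def contour_line_candidates(rgb_roi: np.ndarray, min_area: int = 50):
    """Edge detection and binarization (S231), labeling (S232), removal
    of small regions (S233), and line-segment detection via the
    probabilistic Hough transform (S234-S235)."""
    gray = cv2.cvtColor(rgb_roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    for i in range(1, n_labels):
        if stats[i, cv2.CC_STAT_AREA] <= min_area:
            edges[labels == i] = 0
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                               minLineLength=40, maxLineGap=5)
    return [] if segments is None else segments.reshape(-1, 4)  # x1,y1,x2,y2
```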
  • FIG. 13A is a diagram illustrating an example in which candidates for the contour line 102 of each face are combined, in the contour estimation processing.
  • FIG. 13B is a diagram showing an example of an image of a box sectioned by the combined contour line shown in FIG. 13A.
  • FIG. 13C is a diagram illustrating an example in which candidates for the contour line 102 of each face are combined.
  • the image signal processor 30 selects a combination in which the distance between the contour candidate on another face and the end point 103 is short (FIGS. 13A and 13B) .
  • the image signal processor 30 may estimate the position of another side from the reference plane on which it is placed and the height of the upper face of the box 101.
  • FIG. 14 is a diagram showing an example of a specific flow of the image correction processing (the step S3) shown in FIG. 3.
  • FIG. 15A is a diagram illustrating an example of an image in which the outline of the box of the master camera image is set.
  • FIG. 15B is a diagram illustrating an example of an image obtained by projective transforming the image of one face C of the box of the master camera image so as to be viewed from the front.
  • FIG. 15C is a diagram illustrating an example of an image obtained by projective transforming an image of one face B of the box of the master camera image so as to be viewed from the front.
  • FIG. 15D is a diagram illustrating an example of an image obtained by projective transforming the image of one face A of the box of the master camera image so as to be viewed from the front.
  • the image signal processor 30 calculates a transformation matrix and performs a projective transformation so that, for example, a rectangular area (the face C) of the master camera image (RGB image) shown in FIG. 15A is viewed from the front (FIG. 15B). Further, the image signal processor 30 performs the same processing for the other rectangular areas (the face B in FIG. 15C and the face A in FIG. 15D).
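  • A minimal OpenCV sketch of such a projective transformation (the corner ordering and the output rectangle size are illustrative assumptions):

```python
import cv2
import numpy as np

def rectify_face(image: np.ndarray, corners, width: int, height: int):
    """Warp one quadrilateral face of the box so that it is viewed from
    the front. `corners` holds the four face corners in the master
    camera image, ordered top-left, top-right, bottom-right,
    bottom-left; `width` and `height` set the output rectangle."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (width, height))
```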
  • FIG. 16A is a diagram illustrating an example of images of a plurality of faces (the faces A, B, and C) of the box of the master camera image captured from a certain direction.
  • FIG. 16B is a diagram illustrating a combination of sides of a plurality of faces (the faces A, B, and C) of the box shown in FIG. 16A after projective transformation.
  • FIG. 16C is a diagram illustrating an example of images of a plurality of faces (the faces A, D, and E) of the box of the master camera image captured from a direction different from FIG. 16A.
  • FIG. 16D is a diagram showing a combination of sides of a plurality of faces (the faces A, D, and E) of the box shown in FIG. 16C after projective transformation.
  • the image signal processor 30 constructs the corresponding edge relationships shown in FIG. 16B.
  • the image signal processor 30 constructs the relationship between the corresponding sides shown in FIG. 16D.
  • FIG. 17A is a diagram illustrating an example of a processing of correcting the brightness of the image of each of the adjacent faces A, B, and C subjected to the projective transformation illustrated in FIG. 16B.
  • FIG. 17B is a diagram illustrating an example of an image after correcting the brightness of the image of each of the faces A, B, and C illustrated in FIG. 17A.
  • the image signal processor 30 corrects the brightness of two adjacent faces of the box 101, so that the brightness of the ends of two adjacent faces of the box 101 is close, with respect to the converted image obtained by synthesizing a plurality of faces of the box 101.
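  • One simple way to realize such a correction is to equalize the mean brightness of thin strips on either side of the shared edge; the sketch below assumes that layout (the strip width and the left/right arrangement are illustrative):

```python
import numpy as np

def match_edge_brightness(face_a: np.ndarray, face_b: np.ndarray,
                          strip: int = 8) -> np.ndarray:
    """Rescale face_b so that the brightness at the ends of the two
    adjacent faces is close; face_b is assumed to join face_a at
    face_a's right edge."""
    mean_a = float(face_a[:, -strip:].mean())
    mean_b = float(face_b[:, :strip].mean())
    gain = mean_a / max(mean_b, 1e-6)
    out = face_b.astype(np.float32) * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```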
  • FIG. 18 is a diagram showing an example of a specific flow of the face combining processing (the step S5) shown in FIG. 3.
  • FIG. 19 is a diagram illustrating an example of a processing of selecting a corresponding face A from a plurality of plane images in which the combinations illustrated in FIGS. 16B and 16D are set.
  • FIG. 20A is a diagram illustrating an example of a processing of connecting the images of the plurality of faces illustrated in FIG. 19 based on the selected face A and updating the combination information.
  • FIG. 20B is a diagram illustrating an example of an arrangement of each face based on the combination information illustrated in FIG. 20A.
  • FIG. 21A is a diagram illustrating an example of a user's operation input to the display module 45 of the electric device 100, which is displayed in a development view of the box having the arrangement of each face in FIG. 20B.
  • FIG. 21B is a diagram showing an example of the display of the display module 45 that displays candidates for changing the arrangement of the faces of the developed view in response to the user's operation input in FIG. 21A.
  • FIG. 22A is a diagram illustrating an example of a user's operation input to the display module 45 of the electric device 100 displaying candidates for changing the arrangement of the face of the developed view in FIG. 21B.
  • FIG. 22B is a diagram showing an example of the display of the display module 45 that changes and displays the layout of the face of the developed view in response to the user's operation input in FIG. 22A.
  • the image signal processor 30 searches for a corresponding face (the face A in FIG. 19) based on SIFT features and correlation, over the combinations of the plurality of faces subjected to the projective transformation (the step S51 in FIG. 18).
  • the image signal processor 30 selects one face (the face A in FIG. 19) from the matching results (the step S52 in FIG. 18).
  • the image signal processor 30 calculates a three-dimensional distance between sides whose correspondence is not determined, and connects the closest sides based on the calculation result (the step S53 in FIG. 18) . Thereafter, the image signal processor 30 determines whether or not all combinations have been processed (the step S54 in FIG. 18) .
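  • A minimal sketch of the SIFT-based search of the step S51 (the ratio-test threshold is an assumed value; cv2.SIFT_create requires OpenCV 4.4 or later):

```python
import cv2

def face_match_score(img_a, img_b, ratio: float = 0.75) -> int:
    """Count distinctive SIFT matches between two rectified face images;
    the pair with the highest score is taken to be the same physical
    face photographed from two different directions."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only matches clearly better than the runner-up.
    return sum(1 for p in pairs if len(p) == 2
               and p[0].distance < ratio * p[1].distance)
```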
  • the image signal processor 30 arranges the faces such that adjacent faces approach each other, based on an appropriate reference face (for example, the face A), as shown in FIG. 20B (the step S55 in FIG. 18).
  • the face to be changed (the face B) is designated by the user touching, on the display module 45, the area of the face whose position is to be changed.
  • the selected area is highlighted by the user's operation, and a movable area (the face B) is displayed on the display module 45.
  • an image of the movement may be displayed on the display module 45.
  • the image signal processor 30 displays an image of the developed view of the box 101, obtained by combining converted images of a plurality of faces of the box 101 converted by the projective transformation, on the display module 45.
  • the user touches a desired area on the display module 45.
  • the user may also drag the finger without releasing it, move it to the desired area, and release the finger there to instruct the movement.
  • the electric device 100 changes the layout of the face (the face B) to the selected area in the developed view of the box by the user's operation.
  • the image signal processor 30 changes the arrangement of the face of the development view of the box 101 to be displayed on the display module 45, in response to an operation input relating to an instruction to arrange the face of the box 101 by the user to the input module 46.
  • FIG. 23A is a diagram illustrating an example of a development view of a box that is image data output by the electric device 100.
  • FIG. 23B is a diagram illustrating an example of a three-dimensional image of a box, which is image data output by the electric device 100.
  • the image signal processor 30 outputs face data with texture information that can be used in CAD and 3D CG.
  • the display module 45 outputs an image in which the entire area is surrounded by a boundary.
  • an interval for each small area, a width of a surrounding boundary, a background color, and the like can be designated.
  • the display module 45 may output an image in which the viewpoint, the focal length, the illumination, the shadow, and the like are changed.
  • the image signal processor 30 outputs it as the face data with texture information that can be used in CAD and 3D CG.
  • a development view of the box and an image of the box viewed from any position can be acquired from the camera image of the box.
  • according to the present invention, it is possible to obtain a development view of the box, an image of the box viewed from any position, and the size of the box, by simply photographing the box from an oblique direction.
  • This technology can be provided at low cost by being sold as a smartphone application.
  • the present invention does not require a large-scale device.
  • “first” and “second” are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features.
  • a feature defined as “first” and “second” may comprise one or more of this feature.
  • “a plurality of” means “two or more than two”, unless otherwise specified.
  • the terms “mounted” , “connected” , “coupled” and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements which can be understood by those skilled in the art according to specific situations.
  • a structure in which a first feature is "on" or “below” a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are in contact via an additional feature formed therebetween.
  • a first feature "on” , “above” or “on top of” a second feature may include an embodiment in which the first feature is orthogonally or obliquely “on” , “above” or “on top of” the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature “below” , “under” or “on bottom of” a second feature may include an embodiment in which the first feature is orthogonally or obliquely “below” , "under” or “on bottom of” the second feature, or just means that the first feature is at a height lower than that of the second feature.
  • Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
  • the logic and/or step described in other manners herein or shown in the flow chart may be specifically achieved in any computer readable medium to be used by the instructions execution system, device or equipment (such as a system based on computers, a system comprising processors or other systems capable of obtaining instructions from the instructions execution system, device and equipment executing the instructions) , or to be used in combination with the instructions execution system, device and equipment.
  • the computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
  • examples of the computer readable medium comprise, but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CDROM).
  • the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon, this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs in an electric manner, and then the programs may be stored in the computer memories.
  • each part of the present disclosure may be realized by the hardware, software, firmware or their combination.
  • a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instructions execution system.
  • the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
  • each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module.
  • the integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
  • the storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An electric device (100) includes a camera module (10) that takes a photograph of a subject (101) to acquire a master camera image, a range sensor module (20) that acquires range depth information of the subject (101) by using a light, and an image signal processor (30) that controls the camera module (10) and the range sensor module (20) to acquire a camera image based on the master camera image and the range depth information.
PCT/CN2020/096213 2020-06-15 2020-06-15 Electric device, scanning method of controlling electric device, and computer readable storage medium WO2021253189A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096213 WO2021253189A1 (fr) 2020-06-15 2020-06-15 Electric device, scanning method of controlling electric device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096213 WO2021253189A1 (fr) 2020-06-15 2020-06-15 Electric device, scanning method of controlling electric device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2021253189A1 2021-12-23

Family

ID=79268911

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096213 WO2021253189A1 (fr) 2020-06-15 2020-06-15 Electric device, scanning method of controlling electric device, and computer readable storage medium

Country Status (1)

Country Link
WO (1) WO2021253189A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170264880A1 (en) * 2016-03-14 2017-09-14 Symbol Technologies, Llc Device and method of dimensioning using digital images and depth data
WO2017195650A1 (fr) * 2016-05-13 2017-11-16 Sony Corporation Generation device, generation method, reproduction device, and reproduction method
WO2019048825A1 (fr) * 2017-09-06 2019-03-14 Fovo Technology Limited Method of generating and modifying images of a 3D scene
CN110022439A (zh) * 2019-01-29 2019-07-16 VIA Technologies, Inc. Panoramic video image stabilization device, encoding method, playback method, and evaluation method
US20200097892A1 (en) * 2018-09-21 2020-03-26 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for automatic product enrollment

Similar Documents

Publication Publication Date Title
US10832432B2 (en) Method for training convolutional neural network to reconstruct an image and system for depth map generation from an image
JP7118244B2 (ja) Graphic code recognition method and apparatus, terminal, and program
US11625896B2 (en) Face modeling method and apparatus, electronic device and computer-readable medium
JP5580164B2 (ja) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
US10593014B2 (en) Image processing apparatus, image processing system, image capturing system, image processing method
US10915998B2 (en) Image processing method and device
US20190096092A1 (en) Method and device for calibration
US8441518B2 (en) Imaging apparatus, imaging control method, and recording medium
US11527014B2 (en) Methods and systems for calibrating surface data capture devices
US11557086B2 (en) Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping
Wilm et al. Accurate and simple calibration of DLP projector systems
KR20190104260A (ko) Shape detection
US20230245396A1 (en) System and method for three-dimensional scene reconstruction and understanding in extended reality (xr) applications
US9979858B2 (en) Image processing apparatus, image processing method and program
US9924066B2 (en) Image processing apparatus, information processing method, and program
CN115004685A (zh) Electronic device and method for displaying image at the electronic device
JP5857712B2 (ja) Stereo image generation device, stereo image generation method, and computer program for stereo image generation
US11770551B2 (en) Object pose estimation and tracking using machine learning
US11283970B2 (en) Image processing method, image processing apparatus, electronic device, and computer readable storage medium
WO2021253189A1 (fr) Electric device, scanning method of controlling electric device, and computer readable storage medium
CN116912331A (zh) Calibration data generation method and apparatus, electronic device, and storage medium
CN112085842A (zh) Depth value determination method and apparatus, electronic device, and storage medium
WO2022016331A1 (fr) ToF depth map compensation method and electronic device
KR102505659B1 (ko) Three-dimensional scanning apparatus and method using smartphone-based illumination
WO2022193310A1 (fr) Electric device, method of controlling electric device, and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940578

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940578

Country of ref document: EP

Kind code of ref document: A1