WO2019004971A2 - A system providing job interview experience to users - Google Patents

A system providing job interview experience to users

Info

Publication number: WO2019004971A2 (application PCT/TR2018/050167)
Authority: WO (WIPO, PCT)
Prior art keywords: unit, user, interview, data, server
Application number: PCT/TR2018/050167
Other languages: French (fr)
Other versions: WO2019004971A3 (en)
Inventor: Pelin VARDARLIER
Original Assignee: T.C. Istanbul Medipol Universitesi
Priority date: 2017-04-14
Filing date: 2018-04-14
Application filed by T.C. Istanbul Medipol Universitesi
Publication of WO2019004971A2
Publication of WO2019004971A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/105 Human resources
    • G06Q10/1053 Employment or hiring
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student

Abstract

The present invention relates to a simulation system that allows users to experience a job interview through a user interface such as Virtual Reality. The system (1) of the invention comprises a user interface unit (2), a server (3), an interview information database (4), a language processing unit (5), a behavior analysis unit (6), a virtual interview unit (7), a reporting unit (8) and a social media integration unit (9).

Description

A SYSTEM PROVIDING JOB INTERVIEW EXPERIENCE TO USERS
Technical Field
The present invention relates to a simulation system that allows users to experience a job interview through a user interface such as Virtual Reality.
Prior Art
Nowadays, corporate companies in particular rely heavily on job interviews when recruiting personnel. Many candidates are selected or eliminated during the interview. This situation causes serious problems for candidates who are just starting out in the business world and who have the necessary qualifications but lack job interview experience.
In the known state of the art, candidates enroll in courses to prepare for interviews. However, paying the tuition fees can be problematic for candidates who already have financial difficulties. In addition, during such courses the candidates have little or no chance of practicing in a realistic job interview environment.
In another widespread practice in the known state of the art, candidates obtain questions they may encounter during job interviews from the Internet or from applications and services provided on mobile platforms. However, this requires considerable effort, and candidates must sift useful information out of a large amount of scattered material. Additionally, even if they know the questions, candidates cannot perform sufficiently well because they cannot experience a real job interview atmosphere and therefore end up stressed in the real occasion. In addition, some digital platforms in the state of the art are available for candidates. However, since these platforms do not provide adequate interfaces, they cannot provide an adequate job interview environment for candidates. For this reason, there is a need for a system that allows candidates to gain job interview experience by providing a realistic job interview environment during interview practice.
In the United States patent document US2004186743, which is part of the prior art, a simulation application for job interviews is disclosed. In said invention, the users conduct a job interview with a virtual character present in the simulation environment and gain interview experience. In said invention, the users can conduct the interview verbally or in written form.
Brief Description of the Invention
The object of this invention is to realize a system that allows users to gain experience for job interviews.
Another object of the present invention is to realize a system that allows users to be informed about the process in the job interviews.
Another object of the present invention is to realize a system comprising a scoring mechanism that allows the user's success to be measured in progressively presented interview simulations.
Another object of the present invention is to realize a system that provides a social network for users' interview simulations.
Another aim of this invention is to realize a system that allows users to experience a job interview intensively through a simulated interface presented via virtual reality.
Detailed Description of the Invention
In order to achieve the aim of the present invention, the invention "A System Providing Job Interview Experience to Users" is shown in the attached drawing, wherein said drawing discloses:
Fig. 1: a schematic block diagram of the system according to the present invention.
The parts in the figures are numbered individually and their correspondences are given below.
1. System
2. User interface unit
3. Server
4. Interview information database
5. Language processing unit
6. Behavior analysis unit
7. Virtual Interview unit
8. Reporting unit
9. Social media integration unit
K: User
The system (1) providing job interview experience to users via a virtual reality interface according to the present invention comprises:
- at least one user interface unit (2) for interacting with a user (K), providing a virtual interview environment for the user (K) and measuring the gestures and behaviors of the user (K),
- at least one server (3) which receives information of the user (K) from the user interface unit (2) and transmits to the user interface unit (2) the virtual environment and the interview information,
- at least one interview information database (4) for registering job interview content information including different scenarios for different sector-based job interviews,
- at least one language processing unit (5) for receiving, from the server (3), the answers formed by the user (K) in voice or text format via the hardware contained in the user interface unit (2) and, optionally, providing conversion of text to voice and of voice to text,
- at least one behavior analysis unit (6) for analyzing the images of the user (K) received by means of the hardware contained in the user interface unit (2), interpreting the gestures and mimics of the user (K) in accordance with the previously defined rules included in the unit,
- at least one virtual interview unit (7), in communication with the respective units, for presenting the virtual interview environment to the user (K) and comprising the data necessary for the virtual reality environment created by the hardware contained in the user interface unit (2),
- a reporting unit (8), which receives, stores and shares the evaluation data with the units it is in contact with, wherein the evaluation data are generated by using the data of the language processing unit (5), the behavior analysis unit (6) and the virtual interview unit (7),
- at least one social media integration unit (9) that integrates with social media platforms so that the users (K) can share their job interview experience results
(Figure 1).
The user interface unit (2) in the system (1) according to the invention is a unit which contains the different hardware components with which the users interact. The user interface unit (2) is, at its most basic, a device such as a mobile device, a computer or a tablet that runs an application for an interview simulation. In the most basic embodiment of the user interface unit (2), the interview information received from the server (3) is displayed to the user (K) by means of a specific interface, and the response preferences formed by the user (K) are transmitted to the server (3). The user interface unit (2) is configured to allow the user (K) to generate input in voice or text format in response to the displayed interview information. The user interface unit (2) includes equipment such as a microphone for voice input. The user interface unit (2) includes equipment such as a camera that allows the necessary images to be captured for the behavior analysis of the user (K). The user interface unit (2) that receives the images transmits the relevant image data to the server (3) for analysis.
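Purely as an illustration (the patent does not prescribe any particular implementation, and all class and field names below are hypothetical), the role of the user interface unit (2) described above can be sketched in Python as a thin client that shows prompts and forwards everything it captures to the server (3):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CandidateInput:
    """One exchange captured by the user interface unit (2)."""
    text: Optional[str] = None            # typed answer, if any
    audio: Optional[bytes] = None         # microphone recording, if any
    frames: List[bytes] = field(default_factory=list)  # camera images for behavior analysis

class UserInterfaceUnit:
    """Hypothetical sketch of unit (2): shows interview prompts, forwards user data to the server (3)."""
    def __init__(self, server):
        self.server = server

    def show_prompt(self, interview_item: str) -> None:
        # A real device would render this as text, audio or a VR scene.
        print(f"Interviewer: {interview_item}")

    def submit(self, answer: CandidateInput) -> None:
        # All captured data is transmitted to the server for routing to units (5) and (6).
        self.server.receive_from_interface(answer)
```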
In one embodiment of the invention, the user interface unit (2) comprises depth- and movement-sensing devices (sensors) in addition to image capturing devices, and transmits the data received from the respective sensors to the server (3) to measure the behavior of the user (K).
The user interface unit (2) contains the hardware necessary to create a virtual reality image for the user (K). The user interface unit (2) displays the data received from the server (3) to the user (K) as virtual reality data via hardware such as VR spectacles. The user interface unit (2) transmits the data received from the different hardware contained therein to the server (3) using specific protocols.
In the present invention, the server (3) in the system (1) is in communication with the user interface unit (2); it is the unit that provides the necessary data exchange for the interview simulation. When the user (K) requests an interview simulation via the user interface unit (2), the server (3) transmits to the user interface unit (2) the data received from the interview information database (4) and the virtual interview unit (7) in accordance with the specific definitions. The server (3) obtains the necessary data by communicating with the related units in accordance with the preference inputs that the user (K) provides in response to the data shown to the user (K), and transmits them to the user interface unit (2). When the user (K) initiates the interview simulation, the server (3) receives the data formed by the visual and audio hardware comprised in the user interface unit (2) and transmits them to the language processing unit (5) and the behavior analysis unit (6). The server (3) makes a request to the virtual interview unit (7) in accordance with the virtual environment preferences of the user (K), receives the relevant environment data and transmits it to the user interface unit (2). In one embodiment of the invention, the server (3) communicates to the user interface unit (2) the data it receives from the virtual interview unit (7) and the relevant data it receives from the interview information database (4) in combined form.
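The routing behavior of the server (3) described above can be summarized with a minimal, hypothetical sketch; the unit interfaces are placeholders and not part of the disclosure:

```python
class Server:
    """Sketch of the server (3): exchanges data between the interface (2) and the back-end units."""
    def __init__(self, interview_db, virtual_interview_unit, language_unit, behavior_unit):
        self.interview_db = interview_db                      # unit (4)
        self.virtual_interview_unit = virtual_interview_unit  # unit (7)
        self.language_unit = language_unit                    # unit (5)
        self.behavior_unit = behavior_unit                    # unit (6)

    def start_simulation(self, user_id: str, scenario_id: str) -> dict:
        # Combine the scenario dialogue from (4) with the matching virtual environment from (7).
        dialogue = self.interview_db.get_scenario(scenario_id)
        environment = self.virtual_interview_unit.get_environment(scenario_id)
        return {"user": user_id, "dialogue": dialogue, "environment": environment}

    def receive_from_interface(self, answer) -> None:
        # Audio/text answers go to the language processing unit (5);
        # camera and sensor data go to the behavior analysis unit (6).
        if answer.text or answer.audio:
            self.language_unit.process(answer)
        if answer.frames:
            self.behavior_unit.analyze(answer.frames)
```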
In one embodiment of the invention, the server (3) transmits to the reporting unit (8) information such as profile, resume information, etc., which the user (K) has input via the user interface unit (2).
When the user (K) forms a request to share the results of an interview on social platforms, the server (3) transmits the request to the social media integration unit (9) and directs the response of the social media integration unit (9) to the user interface unit (2).
In the system (1) according to the invention, the interview information database (4) is the unit which records the script information used for the interview simulation. The interview information database (4) contains sector-based interview data covering possible interviews according to different scenarios. In one embodiment of the invention, the interview information database (4) records the interview simulation scenarios with a rating and thereby allows the user (K) to experience progressively different interview scenarios. When the server (3) requests certain information, the interview information database (4) transfers the desired information to the server (3).
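One possible, purely illustrative shape for the rated, sector-based scenario records held by the interview information database (4) is sketched below; the field names and the progression rule are assumptions:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InterviewScenario:
    scenario_id: str
    sector: str            # e.g. "finance", "software", "retail"
    difficulty: int        # rating used to present scenarios progressively
    questions: List[str]   # scripted interviewer dialogue

class InterviewInformationDatabase:
    """Sketch of unit (4): returns scenarios on request from the server (3)."""
    def __init__(self, scenarios: List[InterviewScenario]):
        self._scenarios: Dict[str, InterviewScenario] = {s.scenario_id: s for s in scenarios}

    def get_scenario(self, scenario_id: str) -> InterviewScenario:
        return self._scenarios[scenario_id]

    def next_for_user(self, completed_level: int, sector: str) -> InterviewScenario:
        # Progressive experience: pick the easiest scenario above the level the user has completed.
        candidates = [s for s in self._scenarios.values()
                      if s.sector == sector and s.difficulty > completed_level]
        return min(candidates, key=lambda s: s.difficulty)  # raises ValueError when none remain
```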
In the present invention, the language processing unit (5) in the system (1) is a unit which converts between audio and text formats. The language processing unit (5) converts the responses of the user (K), received from the user interface unit (2) in text format, into audio format. The language processing unit (5) can optionally also convert data in audio format into text format. The language processing unit (5), after performing the conversion of the data transmitted by the server (3), can transmit the converted data back to the server (3) if the server (3) makes a request. The language processing unit (5) transmits the converted data to the reporting unit (8) and allows the data to be registered together with the user (K) and the relevant interview information of the user (K).
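The two-way conversion performed by the language processing unit (5) can be hinted at with the following sketch; the speech-to-text and text-to-speech calls are stubs, since the patent does not name any specific engine:

```python
class LanguageProcessingUnit:
    """Sketch of unit (5): converts between text and audio and forwards results to unit (8)."""
    def __init__(self, reporting_unit):
        self.reporting_unit = reporting_unit  # unit (8)

    def text_to_speech(self, text: str) -> bytes:
        # Placeholder: a real system would call a TTS engine here.
        return text.encode("utf-8")

    def speech_to_text(self, audio: bytes) -> str:
        # Placeholder: a real system would call an STT engine here.
        return audio.decode("utf-8", errors="ignore")

    def process(self, answer) -> None:
        converted = (self.speech_to_text(answer.audio) if answer.audio
                     else self.text_to_speech(answer.text))
        # Converted data is registered together with the user's interview information.
        self.reporting_unit.record("language", converted)
```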
In the system (1) of the present invention, the behavior analysis unit (6) is a unit for analyzing the image and sensor data transmitted by the server (3). The behavior analysis unit (6) detects the emotional state of the user (K) by analyzing the related image and sensor data generated by the hardware comprised in the user interface unit (2) and transmitted to the behavior analysis unit (6) by the server (3). The behavior analysis unit (6) analyzes the image data received from the server (3) using image processing methods and determines the mimics and gestures of the user (K). The behavior analysis unit (6) evaluates the determined gestures and mimics by comparing them with previously recorded reference values.
The behavior analysis unit (6) analyzes the user movement data perceived by the sensors comprised in the user interface unit (2), performs a sensory analysis of the user (K) based on the reference values contained in the behavior analysis unit (6), and transmits the generated data to the reporting unit (8).
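The comparison against reference values performed by the behavior analysis unit (6) could look roughly like the sketch below; the feature extraction step is a stub and the reference thresholds are invented purely for illustration:

```python
from typing import Dict, List

class BehaviorAnalysisUnit:
    """Sketch of unit (6): maps image/sensor features to a coarse assessment of the user's state."""
    # Hypothetical reference values; real rules would be calibrated per feature.
    REFERENCE = {"smile_intensity": 0.4, "gaze_on_interviewer": 0.6, "posture_stability": 0.5}

    def __init__(self, reporting_unit):
        self.reporting_unit = reporting_unit  # unit (8)

    def extract_features(self, frames: List[bytes]) -> Dict[str, float]:
        # Placeholder for image processing / sensor fusion; returns normalized scores in [0, 1].
        return {"smile_intensity": 0.5, "gaze_on_interviewer": 0.7, "posture_stability": 0.3}

    def analyze(self, frames: List[bytes]) -> None:
        features = self.extract_features(frames)
        # Compare each determined feature with its previously recorded reference value.
        assessment = {name: ("above reference" if value >= self.REFERENCE[name] else "below reference")
                      for name, value in features.items()}
        self.reporting_unit.record("behavior", assessment)
```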
In the system (1) according to the present invention, the virtual interview unit (7) is a unit for storing and evaluating the environment data presented to the user (K) via virtual reality. If the server (3) makes a request, the virtual interview unit (7) transmits the requested media data to the server (3). The virtual interview unit (7) includes virtual reality data of different scenarios and transmits the data of the corresponding scenario to the server (3) in response to a request. The virtual interview unit (7) includes environment data and modeled virtual character data which can be processed by the user interface and shown to the user (K). The data transmitted by the virtual interview unit (7) to the server (3) is combined with the scenario dialogues included in the interview information database (4) to provide a virtual three-dimensional interview environment for the user (K). In one embodiment of the invention, the virtual interview unit (7) transmits virtual interview data to the user interface unit (2) by providing access to the user interface unit (2) after the server (3) transmits the user's request for an interview simulation.
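How the environment data of the virtual interview unit (7) might be combined with the scenario dialogue from the database (4) is sketched below, again with hypothetical structures rather than the disclosed implementation:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class VirtualEnvironment:
    scenario_id: str
    room_model: str          # reference to a 3D environment asset
    interviewer_model: str   # reference to a modeled virtual character

class VirtualInterviewUnit:
    """Sketch of unit (7): serves environment and character data on request from the server (3)."""
    def __init__(self, environments: Dict[str, VirtualEnvironment]):
        self._environments = environments  # scenario_id -> VirtualEnvironment

    def get_environment(self, scenario_id: str) -> VirtualEnvironment:
        return self._environments[scenario_id]

def build_virtual_interview(environment: VirtualEnvironment, dialogue: List[str]) -> dict:
    # The server (3) combines environment data (7) with scenario dialogue (4)
    # to produce the three-dimensional interview presented to the user (K).
    return {"environment": environment, "dialogue": dialogue}
```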
In the system (1) according to the invention, the reporting unit (8) is the unit which registers the information created as a result of the interview simulation. The reporting unit (8) records the information obtained from the server (3), the language processing unit (5), the behavior analysis unit (6) and the virtual interview unit (7) as a result of the processes performed by them. The reporting unit (8) records the data in association with the user (K) profile.
In one embodiment of the invention, the reporting unit (8) allows authorized employers to examine interview simulations, profiles and resumes over a service provided via a data network.
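The per-user record keeping of the reporting unit (8), including access for authorized employers, can be sketched as follows; the data model, the default user identifier and the access check are assumptions made for this sketch only:

```python
from collections import defaultdict

class ReportingUnit:
    """Sketch of unit (8): stores evaluation data per user profile and exposes it to authorized parties."""
    def __init__(self):
        self._records = defaultdict(list)   # user_id -> list of (source, data)
        self._authorized_employers = set()

    def record(self, source: str, data, user_id: str = "current-user") -> None:
        # Evaluation data from units (3), (5), (6) and (7) is stored in association with the user profile.
        # "current-user" is only a placeholder default for this sketch.
        self._records[user_id].append((source, data))

    def authorize(self, employer_id: str) -> None:
        self._authorized_employers.add(employer_id)

    def report_for(self, employer_id: str, user_id: str):
        # Only authorized employers may examine simulations, profiles and resumes.
        if employer_id not in self._authorized_employers:
            raise PermissionError("employer not authorized")
        return list(self._records[user_id])
```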
In the system (1) of the invention, the social media integration unit (9) is the unit providing the necessary integration for the user (K) to share interview information. The social media integration unit (9), which receives the sharing request of the user (K) from the server (3), shares the data transmitted with the sharing request with the desired people by connecting to the desired social media platform. The social media integration unit (9) implements the respective protocol required to establish a link to each platform in order to integrate with social media platforms.
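The per-platform protocol handling of the social media integration unit (9) can be illustrated with a simple strategy-style sketch; the platform connectors are stand-ins, and no real platform API is implied:

```python
from typing import Iterable

class PlatformConnector:
    """Base class for a hypothetical per-platform sharing protocol."""
    def share(self, recipients: Iterable[str], payload: str) -> None:
        raise NotImplementedError

class ExamplePlatformConnector(PlatformConnector):
    def share(self, recipients: Iterable[str], payload: str) -> None:
        # A real connector would implement the platform's own protocol here.
        print(f"Sharing {payload!r} with {list(recipients)} on ExamplePlatform")

class SocialMediaIntegrationUnit:
    """Sketch of unit (9): connects to the requested platform and shares the transmitted data."""
    def __init__(self):
        self._connectors = {"example": ExamplePlatformConnector()}

    def handle_share_request(self, platform: str, recipients: Iterable[str], payload: str) -> str:
        self._connectors[platform].share(recipients, payload)
        return "shared"  # response directed back to the user interface unit (2) via the server (3)
```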
With the system (1) of the invention, processes are carried out for users to gain interview experience before a job interview. In the system (1), if the user has the user interface unit (2), the user is provided with a connection to the server (3) to gain interview experience according to different scenarios for job interviews. Practical training, progressive interview simulations and interview-related exams can be conducted through the user interface unit (2) in accordance with the requests of the users. Users can experience different job interview scenarios in the virtual environment if the user interface unit (2) contains virtual reality equipment. In addition, it is also possible for contracted institutions to prepare the interview content for the users and to examine the interview results experienced by the users. In addition to training and interview simulations, users can create profile and personal history information, record it within the system (1), and allow institutions to view the relevant information. In accordance with the present invention, training can be provided through virtual reality, so that users can be given certificates in a more reliable way, since success in the training is measured according to all kinds of behaviors and answers. The system (1) according to the invention allows users to gain all kinds of experience before a job interview.
It is possible to develop a wide variety of applications of the system (1) that is the subject of the present invention, and the invention is not limited to the examples disclosed herein, but is essentially as set forth in the claims.

Claims

1. A system (1) for providing job interview experience to users through a virtual reality interface, wherein said system is characterized in that it comprises:
- at least one user interface unit (2) for interacting with a user (K), providing a virtual interview environment for the user (K) and measuring the gestures and behaviors of the user (K),
at least one server (3) which receives information of the user (K) from the user interface unit (2) and transmits to the user interface unit (2) the virtual environment and the interview information,
at least one interview information database (4) for registering job interview content information including different scenarios for different sector based job interviews,
at least one language processing unit (5) for receiving, from the server (3), the answers formed by the user (K) in voice or text format via the hardware contained in the user interface unit (2) and optionally, providing conversion of text to voice and conversion of voice to text,
at least one behavior analysis unit (6) in communication with the server (3) and the reporting unit (8),
- at least one virtual interview unit (7) in communication with the server (3),
a reporting unit (8), which receives, stores and shares the evaluation data with the units it is in contact with, wherein the evaluation data are generated, by using the data of the language processing unit (5), the behavior analysis unit (6) and the virtual interviewing unit (7),
- at least one social media integration unit (9) that integrates with social media platforms so that the users (K) can share their job interview experience results,
- a server (3) managing the necessary data traffic between the language processing unit (5), the behavior analysis unit (6), the virtual interview unit (7) and the user interface unit (2) to form the virtual interview environment in the user interface unit (2),
- at least one behavior analysis unit (6) for analyzing the images of the user (K) received by means of the hardware contained in the user interface unit (2), interpreting the gestures and mimics of the user (K) in accordance with the previously defined rules included in the unit,
- at least one virtual interview unit (7) comprising the data necessary for formation of the virtual reality environment of the hardware contained in the user interface unit (2).
2. A system (1) according to claim 1, characterized in that, the user interface unit (2) displays the interview information received from the server (3) to the user (K) by means of a specific interface and transmits the response preferences formed by the user (K) to the server (3).
3. A system (1) according to claim 1 or claim 2, characterized in that the user interface unit (2) is structured to allow the user (K) to create input in the form of voice or text.
4. A system (1) according to any of the preceding claims characterized in that the user interface unit (2) receives user images and communicates the respective image data to the server (3) for analysis.
5. A system (1) according to claim 4, characterized in that the user interface unit (2) comprises sensors sensing depth and movement and transmits to the server (3) the data received from the respective sensors to measure the behavior of the user (K).
6. A system (1) according to claim 5, characterized in that the user interface unit (2) provides a display of the data received from the server (3) to the user (K) as virtual reality data via hardware providing virtual environment.
7. A system (1) according to any of the preceding claims, characterized in that the server (3) receives the necessary data via communicating with the related units in accordance with the preferential inputs formed by the user (K) to the data shown to the user (K), and transmits them to the user interface unit (2).
8. A system (1) according to claim 7, characterized in that the server (3) makes a request to the virtual interview unit (7) in accordance with the virtual environment preferences of the user (K) and receives the relevant environment data and transmits it to the user interface unit (2).
9. A system according to claim 7 or claim 8 characterized in that the server (3) transmits the request to the social media integration unit (9) and directs the response of the social media integration unit (9) to the user interface unit (2) when the user (K) forms a request to share the results of an interview in social platforms.
10. A system (1) according to claim 1, characterized in that the interview information database (4) registers and records interview simulation scenarios and allows the user (K) to experience progressively different interview scenarios.
11. A system (1) according to claim 1, characterized in that the language processing unit (5), after performing the conversion of the data transmitted by the server (3), can further transmit back the relevant data (data in the new converted format) to the server (3) if the server (3) makes a request.
12. A system (1) according to Claim 1, characterized in that the behavior analysis unit (6) detects the emotional state of the user (K) by analyzing the related image and sensor data generated by the hardware comprised in the user interface unit (2) and transmitted to the behavior analysis unit (6) by the server (3).
13. A system (1) according to claim 12, characterized in that the behavior analysis unit (6) performs the comparison by comparing the determined gestures and mimics with the previously recorded reference values.
14. A system (1) according to claim 13 characterized in that the behavior analysis unit (6) analyzes the user movement data perceived by the sensors comprised in the user interface unit (2) and determines the sensory analysis of the user (K) based on the reference values contained in the behavior analysis unit (6) and transmits the generated data to the reporting unit (8).
15. A system (1) according to Claim 1, characterized in that the virtual interview unit (7) comprises virtual reality data of different interview scenarios and transmits the data of the corresponding scenario to the server (3) on request.
16. A system (1) according to claim 15, characterized in that the virtual interview unit (7) comprises environment data and modeled virtual character data which can be processed by the user interface and shown to the user (K).
17. A system (1) according to Claim 16, characterized in that the virtual interview unit (7) transmits virtual interview data to the user interface unit (2) by providing access to the user interface unit (2) after the server (3) transmits the user's request for an interview simulation.
18. A system (1) according to Claim 1, characterized in that the reporting unit (8) allows authorized employers to examine interview simulations, profiles and resumes over the service provided via a data network.
PCT/TR2018/050167 2017-04-14 2018-04-14 A system providing job interview experience to users WO2019004971A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR2017/05553A TR201705553A2 (en) 2017-04-14 2017-04-14 A SYSTEM THAT ENABLES USERS TO EXPERIENCE BUSINESS INTERVIEWS
TRTR2017/05553 2017-04-14

Publications (2)

Publication Number Publication Date
WO2019004971A2 true WO2019004971A2 (en) 2019-01-03
WO2019004971A3 WO2019004971A3 (en) 2019-03-28

Family

ID=64741792

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2018/050167 WO2019004971A2 (en) 2017-04-14 2018-04-14 A system providing job interview experience to users

Country Status (2)

Country Link
TR (1) TR201705553A2 (en)
WO (1) WO2019004971A2 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040186743A1 (en) * 2003-01-27 2004-09-23 Angel Cordero System, method and software for individuals to experience an interview simulation and to develop career and interview skills
US8144148B2 (en) * 2007-02-08 2012-03-27 Edge 3 Technologies Llc Method and system for vision-based interaction in a virtual environment
US10120413B2 (en) * 2014-09-11 2018-11-06 Interaxon Inc. System and method for enhanced training using a virtual reality environment and bio-signal data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517012A (en) * 2019-08-09 2019-11-29 福建路阳信息科技有限公司 A kind of campus recruiting management system
CN116091014A (en) * 2023-03-07 2023-05-09 安徽智享云科技有限公司 Human resource interview system based on multi-mode identification
CN116091014B (en) * 2023-03-07 2023-08-25 安徽智享云科技有限公司 Human resource interview system based on multi-mode identification

Also Published As

Publication number Publication date
TR201705553A2 (en) 2018-10-22
WO2019004971A3 (en) 2019-03-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18822864

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18822864

Country of ref document: EP

Kind code of ref document: A2