TY - GEN
T1 - Sentiment Classification and Prediction of Job Interview Performance
AU - Alduayj, Sarah S.
AU - Smith, Phillip
PY - 2019/5
Y1 - 2019/5
N2 - Attracting and hiring talented employees is a challenge for companies. The job interview process is a critical step for both employer and candidate, and a smooth hiring process increases future employees' satisfaction. Candidates tend to share their feedback and their experience of interviews and of a company's hiring process with others, and a negative experience can damage the company's brand image and reputation as an employer, making it harder to attract talented employees. In this research, machine learning and neural network models, such as support vector machines, logistic regression, Naïve Bayes, and long short-term memory (LSTM), were trained to predict candidates' sentiments after a job interview. Each model was trained using several data representations and weighting approaches, such as term binary, term frequency, and term frequency-inverse document frequency (TF-IDF). Training logistic regression with TF-IDF and a unigram word representation achieved an F1-measure of 0.814.
KW - Continuous bag of words
KW - K nearest neighbour
KW - Logistic regression
KW - Long short-term memory
KW - Naïve Bayes
KW - Random forest
KW - Skip gram
KW - Support vector machine
UR - http://www.scopus.com/inward/record.url?scp=85073888989&partnerID=8YFLogxK
U2 - 10.1109/CAIS.2019.8769559
DO - 10.1109/CAIS.2019.8769559
M3 - Conference contribution
AN - SCOPUS:85073888989
T3 - 2nd International Conference on Computer Applications and Information Security, ICCAIS 2019
BT - 2nd International Conference on Computer Applications and Information Security, ICCAIS 2019
PB - Institute of Electrical and Electronics Engineers (IEEE)
T2 - 2nd International Conference on Computer Applications and Information Security, ICCAIS 2019
Y2 - 1 May 2019 through 3 May 2019
ER -