Increasing Trustworthiness of Face Authentication in Mobile Devices by Modeling Gesture Behavior and Location Using Neural Networks
Blerim Rexha,
Gresa Shala and
Valon Xhafa
Additional contact information
Blerim Rexha: Faculty of Electrical and Computer Engineering, University of Prishtina, Kodra e Diellit p.n., 10000 Prishtina, Kosovo
Gresa Shala: Department of Computer Science, Freiburg University, Georges-Köhler Alley 101, 79110 Freiburg im Breisgau, Germany
Valon Xhafa: Department of Informatics, Technical University of Munich, Boltzmannstraße 3, 85748 Garching bei München, Germany
Future Internet, 2018, vol. 10, issue 2, 1-17
Abstract:
Personal mobile devices currently have access to a significant portion of their user’s private sensitive data and are increasingly used for processing mobile payments. Consequently, securing access to these mobile devices is a requirement for securing access to the sensitive data and potentially costly services. Face authentication is one of the promising biometrics-based user authentication mechanisms that has become widely available in this era of mobile computing. With the built-in camera capability of smartphones, tablets, and laptops, face authentication provides an attractive alternative to legacy passwords: its memoryless authentication process can unlock the device faster than a fingerprint scan. Nevertheless, face authentication in the context of smartphones has proven to be vulnerable to attacks. In most current implementations, a sufficiently high-resolution face image displayed on another mobile device is enough to circumvent security measures and bypass the authentication process. To prevent such bypass attacks, we propose additionally modeling gesture behavior and location. Gestures provide a faster and more convenient method of authentication than a complex password. The focus of this paper is to build a secure authentication system with face, location, and gesture recognition as components. User gestures and location data form time series; therefore, in this paper we propose using unsupervised learning with a long short-term memory (LSTM) recurrent neural network to actively learn to recognize, group, and discriminate user gestures and locations. Moreover, a clustering-based technique is also implemented for recognizing gestures and locations.
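To illustrate the kind of pipeline the abstract describes, the sketch below shows one plausible way to learn gesture/location representations without labels: an LSTM autoencoder trained on reconstruction error, with k-means clustering applied to the learned embeddings. This is not the authors' implementation; the class name, feature layout, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch (not the paper's code): an LSTM autoencoder that learns
# fixed-length embeddings of gesture/location time series without labels,
# followed by k-means clustering of the embeddings. All names and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden_size=32):
        super().__init__()
        # Encoder compresses the whole sequence into its final hidden state.
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        # Decoder reconstructs the sequence from the repeated embedding.
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        _, (h, _) = self.encoder(x)               # h: (1, batch, hidden)
        z = h[-1]                                  # sequence embedding
        repeated = z.unsqueeze(1).repeat(1, x.size(1), 1)
        decoded, _ = self.decoder(repeated)
        return self.output(decoded), z

# Toy data: 64 sequences, 50 time steps, 5 features
# (e.g., touch x/y, pressure, latitude, longitude).
x = torch.randn(64, 50, 5)

model = LSTMAutoencoder(n_features=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised training: minimize reconstruction error.
for epoch in range(20):
    optimizer.zero_grad()
    reconstruction, _ = model(x)
    loss = loss_fn(reconstruction, x)
    loss.backward()
    optimizer.step()

# Cluster the learned embeddings to group similar gestures/locations.
with torch.no_grad():
    _, embeddings = model(x)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(embeddings.numpy())
```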
Keywords: authentication; face; smartphones; gestures; location; LSTM; neural network
JEL-codes: O3
Date: 2018
Downloads: (external link)
https://www.mdpi.com/1999-5903/10/2/17/pdf (application/pdf)
https://www.mdpi.com/1999-5903/10/2/17/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:10:y:2018:i:2:p:17-:d:130286