Let the weakest link fail, but gracefully: understanding tailored phishing and measures against it
PhD thesis, Eindhoven University of Technology, 2024
Burda, P.

Abstract
Humans play a critical role in computer systems, making them an integral part of the attack surface. Social engineering attacks aim to deceive individuals in order to gain unauthorized access to sensitive information or to deploy malware on their systems. The most common form of social engineering attack is phishing, in which an attacker sends fraudulent messages (typically emails) claiming to come from a reputable and trusted source. Phishing and, more generally, social engineering attacks exploit inherent vulnerabilities rooted in human cognition, allowing attackers to manipulate system users into executing actions against their own interests. Since these vulnerabilities are universal among potential targets and cannot be easily fixed (e.g., by training), they present a consistent and relatively stable attack surface to exploit. This allows attackers to minimize the complexity and cost associated with deploying malware-based attacks while still potentially achieving a high impact on the system. Phishing attacks are evolving rapidly and increasing in sophistication: attackers can gather targeted information about their victims and use it to build tailored phishing attacks that further improve attack efficacy. The gathered information, such as contextual information on the targets and their environment, can be used to craft believable pretexts that significantly increase attack success rates. The variability of attack characteristics (pretext, links) and the resemblance to regular communication make most detection attempts and user anti-phishing education largely ineffective. The potential scalability and the relatively low effort needed to deploy a tailored phishing campaign create significant risks for Internet users, organizations, and institutions; historical examples include financial losses, data breaches, and the disruption of democratic processes.

Because of the multidisciplinary nature of social engineering, a structured and coherent understanding of the complex socio-technical mechanisms that underpin it is lacking. As generic, mass phishing is considered the most prevalent form of social engineering attack, empirical research has so far focused mainly on these 'untargeted' phishing scenarios. However, the nuances involved in targeted phishing attacks and the effects of manipulating information relevant to the target remain unexplored. Further, existing countermeasures lag behind the evolution of more sophisticated phishing attacks, such as tailored phishing. We thus examine the following main research question: What are the current gaps in our understanding of tailored phishing attacks from the target, attacker, and defender perspectives, and which technological and organizational methods can be employed to address these gaps?

To answer this question, we develop a framework that structures and maps social engineering attacks to a high-level representation of the relevant human cognitive processes. The framework, grounded in well-established cognitive theories, is used to carry out a systematic literature review of the extant empirical research, allowing us to identify gaps in relation to experiment characteristics, core cognitive features, and the exploitable attack surface from the target perspective. We then adopt the attacker's perspective and, with a field experiment in two large organizations, investigate which techniques can best be exploited in a tailored attack and what their effects on human cognition are.
This provides insights into the relationship between cognitive exploits, their delivery methods, and the organizational setting. Current countermeasures, such as automated detection and training, might be off-target for such sophisticated attacks. We therefore take the defender perspective and explore technological and organizational mitigation strategies. We develop a novel approach, implemented as a browser extension, that supports users in detecting phishing websites by identifying which website a phishing page is imitating, using a combination of automated textual and visual feature recognition techniques. The second mitigation approach targets organizational environments, where user reporting of attacks to an organization's IT department can be a significant yet untapped resource for mitigating advanced campaigns. We employ qualitative and quantitative methods to investigate what influences reporting behavior: (1) by interviewing employees targeted in a simulated tailored phishing attack at a small IT company, and (2) by investigating the intention to report as a function of certain human factors. Our findings shed light on the rationale and motivation of users who report phishing attacks and provide a more comprehensive understanding of the traits and attitudes affecting individuals' cyber security behaviors. This carries implications at both the theoretical and practical level that can help organizations improve their security processes, anti-phishing training, and awareness programs.

In a context where the functioning of our society depends heavily on digital communications, this thesis advances social engineering research by identifying, estimating, and mitigating the associated risks. We identify open gaps in research by contextualizing social engineering attacks in the cognitive sciences domain. We estimate the potential risks by demonstrating how target-related information in phishing can outweigh the effects of conventional phishing. Finally, we mitigate the risks by showing why humans, the targets of such attacks, can currently be the best defense against otherwise unstoppable, sophisticated phishing attacks.