In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and in need of technical support. Victims are asked to give the scammers remote access to their machines; the scammers then "diagnose the problem" before offering their support services, which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.

In this talk, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit in exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months, we collect a large corpus of technical support scams and use it to provide insights into their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved experiment in which we interact with 60 different scammers, we experience first-hand their social engineering tactics while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcement agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.

Dr. Nick Nikiforakis (PhD'13) is an Assistant Professor in the Department of Computer Science at Stony Brook University. He is the director of the PragSec lab, where students conduct research in all aspects of pragmatic security and privacy, including web tracking, mobile security, DNS abuse, social engineering, and cybercrime. He has authored more than 40 academic papers, and his work often finds its way into the popular press, including The Register, Slashdot, the BBC, and Wired. His research is supported by the National Science Foundation and the Office of Naval Research, and he regularly serves on the program committees of all top-tier security conferences.

Researchers in cybersecurity often face two conundrums: 1) it is hard to find real-world problems that are interesting to researchers; 2) it is hard to transition cybersecurity research results into practical use. In this talk I will discuss how we overcame these two obstacles in our four-year and still ongoing effort to use an anthropological approach to study cybersecurity operations.

Frequent news reports of breaches at well-funded organizations show a pressing need to improve security operations. However, there has been very little academic research into the problem. Since most cyber defense tasks involve humans (security analysts), it is natural to adopt a human-centric approach to studying the problem. But unlike most of the usable security research that has flourished in recent years, research about security analysts is extremely difficult to conduct in the usual ways, such as through surveys and interviews.

Security Operation Centers (SOCs) have a culture of secrecy. It is extremely difficult for researchers to gain the trust of analysts and discover how they really do their jobs. As a result, research that could benefit security operations is often conducted based on assumptions that do not hold in the real world. We overcome this hurdle by adopting a well-established research method from social and cultural anthropology: long-term participant observation. Multiple PhD and undergraduate students in computer science were trained in this method by an anthropologist and embedded in the SOCs of both academic institutions and corporations. By becoming the subjects we want to study, we perform reflection and reconstruction to gain the "native point of view" of security analysts. Through four years of (still ongoing) fieldwork in two academic and three corporate SOCs, we collected large amounts of data in the form of field notes.

After systematically analyzing the data using qualitative methods widely used in social science research, such as grounded theory and template analysis, we uncovered major findings that explain the burnout phenomenon in SOCs. We further found that the Activity Theory framework formulated by Engeström provides a deep explanation of the many conflicts we observed in the SOC environment that cause inefficiency, and offers insights into how to turn those contradictions into opportunities for innovation that improves operational efficiency.

Finally, in the most recent SOC fieldwork, we were able to achieve the initial goal of this anthropological research: designing effective technologies for security operations that were taken up by the analysts and improved their work efficiency.


The Domain Name System (DNS) is a critical component of the Internet. Its critical nature often makes it the target of direct cyber-attacks and other forms of abuse. Cybercriminals rely heavily upon the reliability and scalability of the DNS protocol to serve as an agile platform for their illicit network operations. For example, modern malware and Internet fraud techniques rely upon the DNS to locate the remote command-and-control (C&C) servers through which new commands from the attacker are issued, to exfiltrate information stolen from victims' computers, and to manage subsequent updates to their malicious toolset.

In this talk I will discuss how we can reason about Internet abuse using DNS. First, I will argue why the algorithmic quantification of DNS reputation and trust is fundamental to understanding the security of our Internet communications. Then, I will examine how DNS traffic relates to malware communications. Among other things, we will reason about data-driven methods that can reliably detect malware communications that employ Domain Generation Algorithms (DGAs), even in the complete absence of the malware sample. Finally, I will conclude my talk by providing a five-year overview of malware network communications. Through this study we will see that, as network security researchers and practitioners, we are still approaching even the very simple detection problems in a fundamentally wrong way.
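To make the DGA discussion concrete, here is a toy, hypothetical domain-generation algorithm in Python (the seed string, hash choice, and domain shape are illustrative and not drawn from any real malware family): an infected host and its operator can each compute the day's candidate rendezvous domains, while a defender who lacks the seed sees only seemingly random names in DNS traffic.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Toy DGA: derive `count` pseudo-random domains from a seed and a date.

    Both the malware and its operator can run this deterministically, so they
    can rendezvous without hard-coding a C&C address into the binary.
    """
    domains = []
    state = f"{seed}-{day.isoformat()}".encode()
    for _ in range(count):
        state = hashlib.sha256(state).digest()      # advance the PRNG state
        label = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(label + ".com")
    return domains
```

The defender's detection problem is the inverse: without the seed, these labels are indistinguishable from random strings, which is why DGA detection typically relies on statistical features of the names and of the failed-lookup patterns they produce.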

Dr. Manos Antonakakis (PhD'12) is an Assistant Professor in the School of Electrical and Computer Engineering (ECE), and adjunct faculty in the College of Computing (CoC), at the Georgia Institute of Technology. He is responsible for the Astrolavos Lab, where students conduct research in the areas of attack attribution, network security and privacy, intrusion detection, and data mining. In May 2012, he received his Ph.D. in Computer Science from the Georgia Institute of Technology. Before joining the Georgia Tech ECE faculty ranks, Dr. Antonakakis held the Chief Scientist role at Damballa, where he was responsible for advanced research projects, university collaborations, and technology transfer efforts. He currently serves as the co-chair of the Academic Committee for the Messaging Anti-Abuse Working Group (MAAWG). Since joining Georgia Tech in 2014, Dr. Antonakakis has raised more than $20M in research funding as Principal Investigator from government agencies and the private sector. Dr. Antonakakis is the author of several U.S. patents and of academic publications in top academic conferences.

Despite soaring investments in IT infrastructure, the state of operational network security continues to be abysmal. We argue that this is because existing enterprise security approaches fundamentally lack precision in one or more dimensions: (1) isolation, to ensure that the enforcement mechanism does not induce interference across different principals; (2) context, to customize policies for different devices; and (3) agility, to rapidly change the security posture in response to events. To address these shortcomings, we present PSI, a new enterprise network security architecture. PSI enables fine-grained and dynamic security postures for different network devices, implemented in isolated enclaves, and thus provides precision along the above dimensions by construction. To this end, PSI leverages recent advances in software-defined networking (SDN) and network functions virtualization (NFV). We design expressive policy abstractions and scalable orchestration mechanisms to implement these security postures. We implement PSI using an industry-grade SDN controller (OpenDaylight) and integrate several commonly used enforcement tools (e.g., Snort, Bro, Squid). We show that PSI is scalable and enables new detection and prevention capabilities that would be difficult to realize with existing solutions.

This is a practice talk for Tianlong’s talk at NDSS 2016.

Tianlong Yu is a third-year PhD student in CyLab, advised by Prof. Vyas Sekar and Prof. Srini Seshan. His research focuses on extending Software-Defined Networking (SDN) and Network Function Virtualization (NFV) to provide customized, dynamic, and isolated policy enforcement for critical assets in a network. More broadly, his research covers enterprise network security and Internet-of-Things network security.

In today’s data-centric economy, issues of privacy are becoming increasingly complex to manage. This is true for users, who often feel helpless when it comes to understanding and managing the many different ways in which their data can be collected and used. But it is also true for developers, service providers, app store operators, and regulators. A significant source of frustration has been the lack of progress in formalizing the disclosure of data collection and use practices. These disclosures today still primarily take the form of long privacy policies, which very few people actually read.

What if computers could actually understand the text of privacy policies? In this talk, I will report on our progress in developing techniques to do just that, and will discuss the development and piloting of tools that build on these technologies. This includes an overview of a compliance tool for mobile apps. The tool automatically analyzes the code of apps and compares its findings with the disclosures made in the text of their privacy policies to identify potential compliance violations. I will report on a study of about 18,000 Android apps. Results of the study suggest that compliance issues are widespread.
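The comparison step at the heart of such a tool can be pictured with a minimal sketch (the data-practice names and the helper function are hypothetical; the real tool derives its code-side findings from static analysis and its policy-side findings from text analysis):

```python
def potential_violations(practices_in_code: set[str],
                         practices_disclosed: set[str]) -> list[str]:
    """Flag data practices observed in an app's code that its privacy
    policy never discloses -- candidates for a compliance violation."""
    return sorted(set(practices_in_code) - set(practices_disclosed))

# Illustrative inputs only:
code_findings = {"location", "device_id", "contacts"}   # from code analysis
policy_disclosures = {"location", "device_id"}          # from policy text
flags = potential_violations(code_findings, policy_disclosures)
```

Here `flags` would contain `"contacts"`: the app reads contacts but the policy never says so. The hard research problems are, of course, producing the two input sets reliably.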

In the second part of this talk, I will discuss how using machine learning we can also build models of people’s privacy preferences and help them manage their privacy settings. This will include an overview of our work on Personalized Privacy Assistants. These assistants are intended to selectively notify their users about data collection and use practices they may find egregious and are also capable of helping their users configure available privacy settings. We will review results of a pilot involving one such assistant developed to help users manage their mobile app permissions. I will conclude with a discussion of ongoing work to extend this functionality in the context of Internet of Things scenarios.

Norman M. Sadeh is a Professor in the School of Computer Science at Carnegie Mellon University (CMU) and a faculty member at CyLab. He is director of CMU’s Mobile Commerce Laboratory and co-director of the MSIT Program in Privacy Engineering. He also co-founded the School of Computer Science’s PhD Program in Societal Computing (formerly Computation, Organizations and Society). His primary research interests are in mobile computing, the Internet of Things, cybersecurity, online privacy, user-oriented machine learning, human-computer interaction, and artificial intelligence. His research has been credited with influencing the design and development of a number of commercial products, as well as activities at the US Federal Trade Commission and the California Office of the Attorney General.

Between 2008 and 2011, Norman served as founding CEO of Wombat Security Technologies, a leading provider of SaaS cybersecurity training products and anti-phishing solutions originally developed as part of research with several of his colleagues at CMU. As chairman of the board and chief scientist, Norman remains actively involved in the company, working closely with the management team on both business and technology strategies. Among other activities, Norman currently leads two of the largest domestic research projects in privacy, an NSF SaTC Frontier project on Usable Privacy Policies and a project on Personalized Privacy Assistants funded by the DARPA Brandeis initiative, the National Science Foundation and Google’s IoT Expedition.

In the late nineties, Norman was program manager with the European Commission’s ESPRIT research program, prior to serving for two years as Chief Scientist of its US $600M (EUR 550M) initiative in New Methods of Work and eCommerce within the Information Society Technologies (IST) program. As such, he was responsible for shaping European research priorities in collaboration with industry and universities across Europe. These activities eventually resulted in the launch of over 200 R&D projects involving over 1,000 European organizations from industry and research. While at the Commission, Norman also contributed to a number of EU policy initiatives related to eCommerce, the Internet, cybersecurity, privacy and entrepreneurship.

To comply with 1990s-era US export restrictions on cryptography, early versions of SSL/TLS supported reduced-strength cipher suites that were restricted to 40-bit symmetric keys and 512-bit RSA and Diffie-Hellman public values.  Although the relevant export restrictions have not been in effect since 2000, modern implementations often maintain support for these cipher suites along with old protocol versions.
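To see why the 40-bit limit matters, consider a toy Python sketch (the SHA-256-based keystream is hypothetical and stands in for an export cipher; this is not an actual TLS attack): it recovers a 20-bit key by exhaustive search with a short known plaintext, and a genuine 40-bit search is only about a million times larger, well within reach of modern hardware.

```python
import hashlib

def keystream(key: int, n: int) -> bytes:
    """Toy stream cipher: derive n keystream bytes from a small integer key."""
    return hashlib.sha256(key.to_bytes(4, "big")).digest()[:n]

def brute_force(ciphertext: bytes, known_plaintext: bytes, bits: int = 20):
    """Exhaustively try every key in a `bits`-bit key space."""
    for key in range(1 << bits):
        ks = keystream(key, len(ciphertext))
        if bytes(c ^ k for c, k in zip(ciphertext, ks)) == known_plaintext:
            return key
    return None
```

With 20 bits the loop finishes in seconds in pure Python; the point of the export restrictions was precisely that 40-bit searches were feasible for well-resourced parties even in the 1990s.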

In this talk, I will discuss recent attacks against TLS (FREAK, Logjam, and DROWN) demonstrating how server-side support for these insecure cipher suites harms the security of users with modern TLS clients.  These attacks exploit a combination of clever cryptanalysis, advances in computing power since the 1990s, previously undiscovered protocol flaws, and implementation vulnerabilities.

Nadia Heninger is an assistant professor in the Computer and Information Science department at the University of Pennsylvania. Her research focuses on security, applied cryptography, and algorithms. Previously, she was an NSF Mathematical Sciences Postdoctoral Fellow at UC San Diego and a visiting researcher at Microsoft Research New England. She received her Ph.D. in computer science in 2011 from Princeton and a B.S. in electrical engineering and computer science in 2004 from UC Berkeley.

Using the Internet is a risky venture: cybercriminals could be lurking behind any email or in any web page, just waiting to compromise your machine.  Practicing and researching cybersecurity is about minimizing that risk.

Unfortunately, modern cybercriminals don't compromise machines just because they can: they do it to make money or steal data. Likewise, the risks that end users care about aren't measured in vulnerabilities discovered or hosts compromised; users care about losing hard-earned money, embarrassing pictures, or simply a night of free time spent removing malware from the family computer. Cybersecurity research should minimize the chance of successful attacks by maximizing the number of vulnerabilities patched or infiltrations thwarted. However, these technical goals are fundamentally intermediate goals: the ultimate goal of cybersecurity is to minimize the harm that comes to users, a quantity denominated in dollars lost, days spent recovering from attacks, or data lost to attackers. By quantifying the harm of attacks in these meaningful units, we can focus defenses and mitigations on the attacks that cause the most harm to the Internet's users.

This talk will highlight recent results that improve our understanding of the true cost of cybersecurity events and of the benefits of security's enablers.  I'll also show how these results can lead to actionable insights into which attacks we should be spending our finite effort combating. I'll cover losses due to affiliate fraud, measured in profits lost both by the platforms and by legitimate marketers. I'll also cover losses incurred due to typosquatting: while typosquatting is perpetrated by many thousands of domains, the harm it causes has been unclear. Furthermore, I'll explain some of our results on how features in modern browsers benefit end users.  Finally, I'll showcase a tool that quantifies the value of a user's private data (their account logins), which can motivate better security behavior through a personalized warning about how much their account might be worth to cybercriminals.

We are clearly moving toward an Internet where encryption is ubiquitous—by some estimates, more than half of all Web traffic is HTTPS, and the number is growing. This is a win in terms of privacy and security, but it comes at the cost of functionality and performance, since encryption blinds middleboxes (devices like intrusion detection systems or web caches that process traffic in the network). In this talk I will describe two recent and ongoing projects exploring techniques for including middleboxes in secure sessions in a controlled manner. The first is a protocol, developed in collaboration with Telefónica Research and called Multi-Context TLS (mcTLS), that adds access control to TLS so that middleboxes can be added to a TLS session with restricted permissions. The second, which is ongoing work with Microsoft Research, explores bringing trusted computing technologies like Intel SGX to network middleboxes.
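A minimal sketch of the core mcTLS idea (this is an illustration of the concept, not the actual mcTLS key schedule or record format): the endpoints derive an independent key per traffic "context", and a middlebox receives only the keys for the contexts it is permitted to access, so a web cache might read headers without ever being able to decrypt bodies.

```python
import hashlib
import hmac

def context_key(session_secret: bytes, context: str) -> bytes:
    """Derive an independent key for one traffic context (illustrative KDF)."""
    return hmac.new(session_secret, context.encode(), hashlib.sha256).digest()

# Illustrative session secret shared by the two endpoints:
session_secret = b"shared-endpoint-secret"
keys = {c: context_key(session_secret, c) for c in ("headers", "body")}

# Access control: a caching middlebox is granted the headers key only,
# so it can process headers but cannot decrypt the body context.
middlebox_keys = {"headers": keys["headers"]}
```

The design choice this captures is that permissions become a key-distribution decision made by the endpoints, rather than an all-or-nothing TLS interception.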

David Naylor is a sixth-year Ph.D. student at Carnegie Mellon University, where he's advised by Peter Steenkiste. His primary research interests are computer networking, security, and privacy, but he’s also interested in Web measurement and performance. David is currently on the academic job market.

Algorithms in nature are simple and elegant, yet ultimately sophisticated. All behaviors are connected to the basic instincts we take for granted. The biomorphic approach attempts to connect artificial intelligence to primitive intelligence. It explores the idea that genuinely intelligent computers will be able to interact naturally with humans. To form the bridge, computers need the ability to recognize, understand, and even have instincts similar to those of living creatures. In this talk, I will introduce the theoretical models in my new book "Instinctive Computing" and a few real-world applications, including visual analytics of the dynamic patterns of malware spreading, SQL and DDoS attacks, IoT data analysis in a smart building, speaker verification on mobile phones, privacy algorithms for microwave imaging in airports, and privacy-aware smart windows for the autonomous light-rail transit vehicles in downtown Singapore.

Dr. Yang Cai is a Senior Systems Scientist at CyLab, Director of the Visual Intelligence Studio, and Associate Professor of Biomedical Engineering at Carnegie Mellon University, Pittsburgh. His research interests include steganography, machine intelligence, video analytics, interactive visualization of big data, biomorphic algorithms, medical imaging systems, and visual privacy algorithms. He has published six books, including the new monograph "Instinctive Computing" (Springer, 2016) and the textbook "Ambient Diagnostics" (CRC, 2014). He has also taught the courses "Image, Video and Multimedia" (ECE 18-798), "Cognitive Video" (ECE 18-799K), "Clinical Practicum" (BME 42-790), "Human Algorithms" (Fine Art 06-427), "Innovation Process" (HCI 05-899C), and the university-wide course "Creativity" (99-428). He was a Research Fellow in the Studio for Creative Inquiry at the School of Art and has exhibited his artwork in Italy and the U.S. He has been a volunteer scientist for 3D imaging at an archaeology field school in the Alps for ten years.

Jam resistance for omnidirectional wireless networks is an important problem. Existing jam-resistant systems use a secret spreading sequence, a secret hop sequence, or some other information that must be kept secret from the jammer. BBC coding is revolutionary in that it achieves jam resistance without any shared secret. BBC requires a hash function that is fast and secure, but "secure" in a different sense than for standard cryptographic hashes. We present a candidate hash function: Glowworm. For the incremental hashes used in BBC codes, it can hash a string of arbitrary length in 11 clock cycles. That is not 11 cycles per bit or 11 cycles per byte: it is 11 cycles to hash the entire string, given that the string being hashed differs from the previous one only by the addition or deletion of its last bit. An exhaustive security proof has been done for 32-bit Glowworm.
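A toy Python sketch of the incremental-hash interface described above (this is not Glowworm itself; the seed and mixing constants are arbitrary illustrative values): keeping a stack of intermediate states makes appending or deleting the last bit a constant-time update instead of a full rehash, which is the property BBC coding exploits when it extends candidate strings bit by bit.

```python
MASK = 0xFFFFFFFF  # work in 32-bit state, as in 32-bit Glowworm

class IncrementalHash:
    """Toy incremental hash: O(1) append/delete of the final bit."""

    def __init__(self, seed: int = 0x9E3779B9):   # arbitrary illustrative seed
        self._states = [seed]

    def append_bit(self, bit: int) -> None:
        h = self._states[-1]
        # Illustrative mixing step: multiply, then XOR a bit-dependent constant.
        h = ((h * 0x01000193) ^ (0xDEADBEEF if bit else 0x5BD1E995)) & MASK
        self._states.append(h)                    # O(1) per appended bit

    def delete_last_bit(self) -> None:
        self._states.pop()                        # O(1): restore prior state

    def digest(self) -> int:
        return self._states[-1]
```

Unlike this sketch, the real design must also satisfy the BBC-specific security requirements (e.g., resistance to an attacker crafting colliding prefixes), which is what the exhaustive proof for 32-bit Glowworm addresses.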

Martin Carlisle is a teaching professor in the Carnegie Mellon University Information Networking Institute and a security researcher in CMU’s CyLab. Previously, he was a computer science professor at the United States Air Force Academy, Director of the Academy Center for Cyberspace Research, and founder and coach of the Air Force Academy Cyber Competition Team. Prof. Carlisle earned a PhD in Computer Science from Princeton University. His research interests include computer security, programming languages, and computer science education.

He is the primary author of RAPTOR, an introductory programming environment used in universities and schools around the world.  He founded and coached the Air Force Academy Cyber Competition Team, which advanced to the National Collegiate Cyber Defense Competition in four different years.  He is an ACM Distinguished Educator, a Colorado Professor of the Year, and a recipient of the Arthur S. Flemming Award for Exceptional Federal Service.

